MFH-Net: A Hybrid CNN-Transformer Network Based Multi-Scale Fusion for Medical Image Segmentation

IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | International Journal of Imaging Systems and Technology, Vol. 34, Issue 6 | Pub Date: 2024-10-02 | DOI: 10.1002/ima.23192
Ying Wang, Meng Zhang, Jian'an Liang, Meiyan Liang
{"title":"MFH-Net:基于混合 CNN-Transformer 网络的多尺度融合医学图像分割技术","authors":"Ying Wang,&nbsp;Meng Zhang,&nbsp;Jian'an Liang,&nbsp;Meiyan Liang","doi":"10.1002/ima.23192","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>In recent years, U-Net and its variants have gained widespread use in medical image segmentation. One key aspect of U-Net's design is the skip connection, facilitating the retention of detailed information and leading to finer segmentation results. However, existing research often concentrates on enhancing either the encoder or decoder, neglecting the semantic gap between them, and resulting in suboptimal model performance. In response, we introduce Multi-Scale Fusion module aimed at enhancing the original skip connections and addressing the semantic gap. Our approach fully incorporates the correlation between outputs from adjacent encoder layers and facilitates bidirectional information exchange across multiple layers. Additionally, we introduce Channel Relation Perception module to guide the fused multi-scale information for efficient connection with decoder features. These two modules collectively bridge the semantic gap by capturing spatial and channel dependencies in the features, contributing to accurate medical image segmentation. Building upon these innovations, we propose a novel network called MFH-Net. On three publicly available datasets, ISIC2016, ISIC2017, and Kvasir-SEG, we perform a comprehensive evaluation of the network. The experimental results show that MFH-Net exhibits higher segmentation accuracy in comparison with other competing methods. Importantly, the modules we have devised can be seamlessly incorporated into various networks, such as U-Net and its variants, offering a potential avenue for further improving model performance.</p>\n </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 6","pages":""},"PeriodicalIF":3.0000,"publicationDate":"2024-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"MFH-Net: A Hybrid CNN-Transformer Network Based Multi-Scale Fusion for Medical Image Segmentation\",\"authors\":\"Ying Wang,&nbsp;Meng Zhang,&nbsp;Jian'an Liang,&nbsp;Meiyan Liang\",\"doi\":\"10.1002/ima.23192\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n <p>In recent years, U-Net and its variants have gained widespread use in medical image segmentation. One key aspect of U-Net's design is the skip connection, facilitating the retention of detailed information and leading to finer segmentation results. However, existing research often concentrates on enhancing either the encoder or decoder, neglecting the semantic gap between them, and resulting in suboptimal model performance. In response, we introduce Multi-Scale Fusion module aimed at enhancing the original skip connections and addressing the semantic gap. Our approach fully incorporates the correlation between outputs from adjacent encoder layers and facilitates bidirectional information exchange across multiple layers. Additionally, we introduce Channel Relation Perception module to guide the fused multi-scale information for efficient connection with decoder features. These two modules collectively bridge the semantic gap by capturing spatial and channel dependencies in the features, contributing to accurate medical image segmentation. Building upon these innovations, we propose a novel network called MFH-Net. 
On three publicly available datasets, ISIC2016, ISIC2017, and Kvasir-SEG, we perform a comprehensive evaluation of the network. The experimental results show that MFH-Net exhibits higher segmentation accuracy in comparison with other competing methods. Importantly, the modules we have devised can be seamlessly incorporated into various networks, such as U-Net and its variants, offering a potential avenue for further improving model performance.</p>\\n </div>\",\"PeriodicalId\":14027,\"journal\":{\"name\":\"International Journal of Imaging Systems and Technology\",\"volume\":\"34 6\",\"pages\":\"\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-10-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Imaging Systems and Technology\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/ima.23192\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Imaging Systems and Technology","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ima.23192","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract


In recent years, U-Net and its variants have gained widespread use in medical image segmentation. One key aspect of U-Net's design is the skip connection, which helps retain detailed information and leads to finer segmentation results. However, existing research often concentrates on enhancing either the encoder or the decoder, neglecting the semantic gap between them and resulting in suboptimal model performance. In response, we introduce a Multi-Scale Fusion module aimed at enhancing the original skip connections and addressing the semantic gap. Our approach fully exploits the correlation between outputs from adjacent encoder layers and facilitates bidirectional information exchange across multiple layers. Additionally, we introduce a Channel Relation Perception module to guide the fused multi-scale information toward an efficient connection with the decoder features. Together, these two modules bridge the semantic gap by capturing spatial and channel dependencies in the features, contributing to accurate medical image segmentation. Building on these innovations, we propose a novel network called MFH-Net. We perform a comprehensive evaluation of the network on three publicly available datasets: ISIC2016, ISIC2017, and Kvasir-SEG. The experimental results show that MFH-Net achieves higher segmentation accuracy than competing methods. Importantly, the modules we have devised can be seamlessly incorporated into various networks, such as U-Net and its variants, offering a potential avenue for further improving model performance.
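
The abstract names two architectural ideas: a Multi-Scale Fusion module that combines adjacent encoder scales before the skip connection, and a Channel Relation Perception module that aligns the fused skip features with the decoder features through channel dependencies. The paper's implementation is not given on this page, so the sketch below is only an illustration of how such modules are commonly built in PyTorch; the class names, channel widths, resampling choices, and the squeeze-and-excitation style gate are assumptions, not MFH-Net's published code.

```python
# Illustrative sketch only: module names, channel widths, and the fusion/gating
# rules are assumptions for demonstration, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleFusion(nn.Module):
    """Fuses an encoder feature map with its finer and coarser neighbours
    before it is passed across the skip connection (hypothetical form)."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions project the resampled neighbours to this scale's width.
        self.from_finer = nn.Conv2d(channels // 2, channels, kernel_size=1)
        self.from_coarser = nn.Conv2d(channels * 2, channels, kernel_size=1)
        self.merge = nn.Sequential(
            nn.Conv2d(channels * 3, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, finer, current, coarser):
        # Bring both neighbours to the spatial size of the current scale.
        finer = self.from_finer(F.avg_pool2d(finer, kernel_size=2))
        coarser = self.from_coarser(
            F.interpolate(coarser, size=current.shape[2:], mode="bilinear",
                          align_corners=False)
        )
        return self.merge(torch.cat([finer, current, coarser], dim=1))


class ChannelRelationPerception(nn.Module):
    """Reweights the fused skip features with channel statistics of the decoder
    features; a squeeze-and-excitation style gate stands in for the paper's module."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, fused_skip, decoder_feat):
        weights = self.gate(decoder_feat)  # (B, C, 1, 1) channel weights
        return torch.cat([fused_skip * weights, decoder_feat], dim=1)


if __name__ == "__main__":
    # Three adjacent encoder scales (finer, current, coarser) and one decoder map.
    finer = torch.randn(1, 32, 64, 64)
    current = torch.randn(1, 64, 32, 32)
    coarser = torch.randn(1, 128, 16, 16)
    decoder = torch.randn(1, 64, 32, 32)

    fused = MultiScaleFusion(64)(finer, current, coarser)
    out = ChannelRelationPerception(64)(fused, decoder)
    print(out.shape)  # torch.Size([1, 128, 32, 32])
```

In a U-Net style network of this kind, a module like MultiScaleFusion would replace the plain identity skip at each encoder level, and a gate like ChannelRelationPerception would sit at the matching decoder level before its convolution block.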

Source Journal
International Journal of Imaging Systems and Technology (Engineering & Technology: Imaging Science & Photographic Technology)
CiteScore: 6.90
Self-citation rate: 6.10%
Articles per year: 138
Review time: 3 months
Journal Description: The International Journal of Imaging Systems and Technology (IMA) is a forum for the exchange of ideas and results relevant to imaging systems, including imaging physics and informatics. The journal covers all imaging modalities in humans and animals. IMA accepts technically sound and scientifically rigorous research in the interdisciplinary field of imaging, including relevant algorithmic research and hardware and software development, and their applications relevant to medical research. The journal provides a platform to publish original research in structural and functional imaging. The journal is also open to imaging studies of the human body and on animals that describe novel diagnostic imaging and analysis methods. Technical, theoretical, and clinical research in both normal and clinical populations is encouraged. Submissions describing methods, software, databases, replication studies as well as negative results are also considered. The scope of the journal includes, but is not limited to, the following in the context of biomedical research: Imaging and neuro-imaging modalities: structural MRI, functional MRI, PET, SPECT, CT, ultrasound, EEG, MEG, NIRS etc.; Neuromodulation and brain stimulation techniques such as TMS and tDCS; Software and hardware for imaging, especially related to human and animal health; Image segmentation in normal and clinical populations; Pattern analysis and classification using machine learning techniques; Computational modeling and analysis; Brain connectivity and connectomics; Systems-level characterization of brain function; Neural networks and neurorobotics; Computer vision, based on human/animal physiology; Brain-computer interface (BCI) technology; Big data, databasing and data mining.
Latest Articles From This Journal
Predicting the Early Detection of Breast Cancer Using Hybrid Machine Learning Systems and Thermographic Imaging
CATNet: A Cross Attention and Texture-Aware Network for Polyp Segmentation
VMC-UNet: A Vision Mamba-CNN U-Net for Tumor Segmentation in Breast Ultrasound Image
Suppression of the Tissue Component With the Total Least-Squares Algorithm to Improve Second Harmonic Imaging of Ultrasound Contrast Agents
Segmentation and Classification of Breast Masses From the Whole Mammography Images Using Transfer Learning and BI-RADS Characteristics