DAUNet: A deformable aggregation UNet for multi-organ 3D medical image segmentation

Pattern Recognition Letters · IF 3.9 · CAS Tier 3 (Computer Science) · Q2 (Computer Science, Artificial Intelligence) · Publication date: 2025-03-11 · DOI: 10.1016/j.patrec.2025.03.005
Qinghao Liu, Min Liu, Yuehao Zhu, Licheng Liu, Zhe Zhang, Yaonan Wang
{"title":"DAUNet: A deformable aggregation UNet for multi-organ 3D medical image segmentation","authors":"Qinghao Liu,&nbsp;Min Liu,&nbsp;Yuehao Zhu,&nbsp;Licheng Liu,&nbsp;Zhe Zhang,&nbsp;Yaonan Wang","doi":"10.1016/j.patrec.2025.03.005","DOIUrl":null,"url":null,"abstract":"<div><div>Medical image segmentation is a critical component of medical image analysis. However, due to the limitations in the size and shape of the receptive field, neural networks often struggle to adapt to segmentation targets of different sizes in 3D medical images. Furthermore, they may not be able to adequately model the intra-slice and inter-slice relationships in 3D medical images. To overcome these challenges, we propose a Deformable Aggregation UNet (DAUNet) for multi-organ segmentation in medical images. We introduce two specially designed modules into the UNet structure, which is composed of residual blocks. Specifically, the Deformable Aggregation Module (DAM) incorporates deformable receptive fields and gate fusion methods. This enhances the fusion of information from various hierarchical levels, enabling DAUNet to adapt to the segmentation of multiple organs or structures at different scales within the abdominal region. Simultaneously, the Fourier Attention Module (FAM) leverages Fourier convolution to enhance long-range dependencies and relationships within the entire 3D volume of medical images, accommodating the detailed structural variations in different directions. The network was assessed on three publicly available datasets (ACDC, BTCV, and BraTS-Africa2024), as well as a private clinical dataset for the TME-CT segmentation task. Compared to existing state-of-the-art (SOTA) methods, DAUNet achieves superior performance, with improvements of 0.96%, 2.52%, 1.60%, and 4.57% in average Dice scores across the four datasets, respectively.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"191 ","pages":"Pages 58-65"},"PeriodicalIF":3.9000,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition Letters","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167865525000893","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}

Abstract

Medical image segmentation is a critical component of medical image analysis. However, due to the limitations in the size and shape of the receptive field, neural networks often struggle to adapt to segmentation targets of different sizes in 3D medical images. Furthermore, they may not be able to adequately model the intra-slice and inter-slice relationships in 3D medical images. To overcome these challenges, we propose a Deformable Aggregation UNet (DAUNet) for multi-organ segmentation in medical images. We introduce two specially designed modules into the UNet structure, which is composed of residual blocks. Specifically, the Deformable Aggregation Module (DAM) incorporates deformable receptive fields and gate fusion methods. This enhances the fusion of information from various hierarchical levels, enabling DAUNet to adapt to the segmentation of multiple organs or structures at different scales within the abdominal region. Simultaneously, the Fourier Attention Module (FAM) leverages Fourier convolution to enhance long-range dependencies and relationships within the entire 3D volume of medical images, accommodating the detailed structural variations in different directions. The network was assessed on three publicly available datasets (ACDC, BTCV, and BraTS-Africa2024), as well as a private clinical dataset for the TME-CT segmentation task. Compared to existing state-of-the-art (SOTA) methods, DAUNet achieves superior performance, with improvements of 0.96%, 2.52%, 1.60%, and 4.57% in average Dice scores across the four datasets, respectively.
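The abstract describes the Fourier Attention Module only at a high level. The sketch below is a minimal, hypothetical PyTorch illustration of how a Fourier-convolution block can give every voxel a volume-wide receptive field; the module name, layer choices, and shapes are assumptions for illustration and are not the authors' implementation.

```python
# Hypothetical sketch of a Fourier-convolution block in the spirit of the
# described FAM; names and design details are assumptions, not the paper's code.
import torch
import torch.nn as nn


class FourierAttention3D(nn.Module):
    """Mixes features globally by filtering them in the frequency domain.

    A real-valued 3D FFT turns spatial filtering into a per-frequency
    multiplication, so each output voxel can depend on the whole volume.
    """

    def __init__(self, channels: int):
        super().__init__()
        # 1x1x1 convolution applied to the real/imaginary parts of the spectrum.
        self.freq_conv = nn.Conv3d(2 * channels, 2 * channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, d, h, w = x.shape
        # Real FFT over the three spatial axes (depth, height, width).
        spec = torch.fft.rfftn(x, dim=(-3, -2, -1))
        # Stack real and imaginary parts as channels so Conv3d can mix them.
        spec = torch.cat([spec.real, spec.imag], dim=1)
        spec = self.act(self.freq_conv(spec))
        real, imag = spec.chunk(2, dim=1)
        # Back to the spatial domain; output keeps the input resolution.
        out = torch.fft.irfftn(torch.complex(real, imag), s=(d, h, w), dim=(-3, -2, -1))
        return out + x  # residual connection, echoing the UNet's residual blocks


if __name__ == "__main__":
    fam = FourierAttention3D(channels=8)
    volume = torch.randn(1, 8, 16, 32, 32)  # (batch, channels, D, H, W)
    print(fam(volume).shape)                # torch.Size([1, 8, 16, 32, 32])
```

Operating in the frequency domain is one common way to capture intra-slice and inter-slice dependencies without stacking many local convolutions; the specific gating and aggregation used in DAUNet's DAM and FAM may differ from this sketch.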
Source journal

Pattern Recognition Letters (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 12.40 · Self-citation rate: 5.90% · Articles published: 287 · Review time: 9.1 months

Journal description: Pattern Recognition Letters aims at rapid publication of concise articles of a broad interest in pattern recognition. Subject areas include all the current fields of interest represented by the Technical Committees of the International Association of Pattern Recognition, and other developing themes involving learning and recognition.