CIS-UNet: Multi-class segmentation of the aorta in computed tomography angiography via context-aware shifted window self-attention.

Computerized Medical Imaging and Graphics · Impact Factor 5.4 · JCR Q1 (Engineering, Biomedical) · CAS Medicine Tier 2 · Pub Date: 2024-11-19 · DOI: 10.1016/j.compmedimag.2024.102470
Muhammad Imran, Jonathan R Krebs, Veera Rajasekhar Reddy Gopu, Brian Fazzone, Vishal Balaji Sivaraman, Amarjeet Kumar, Chelsea Viscardi, Robert Evans Heithaus, Benjamin Shickel, Yuyin Zhou, Michol A Cooper, Wei Shao
Computerized Medical Imaging and Graphics, vol. 118 (2024), article 102470.
Citations: 0

Abstract

Advancements in medical imaging and endovascular grafting have facilitated minimally invasive treatments for aortic diseases. Accurate 3D segmentation of the aorta and its branches is crucial for interventions, as inaccurate segmentation can lead to erroneous surgical planning and endograft construction. Previous methods simplified aortic segmentation as a binary image segmentation problem, overlooking the necessity of distinguishing between individual aortic branches. In this paper, we introduce Context-Infused Swin-UNet (CIS-UNet), a deep learning model designed for multi-class segmentation of the aorta and thirteen aortic branches. Combining the strengths of Convolutional Neural Networks (CNNs) and Swin transformers, CIS-UNet adopts a hierarchical encoder-decoder structure comprising a CNN encoder, a symmetric decoder, skip connections, and a novel Context-aware Shifted Window Self-Attention (CSW-SA) module as the bottleneck block. Notably, CSW-SA introduces a unique adaptation of the patch merging layer, distinct from its traditional use in the Swin transformers. CSW-SA efficiently condenses the feature map, providing a global spatial context, and enhances performance when applied at the bottleneck layer, offering superior computational efficiency and segmentation accuracy compared to the Swin transformers. We evaluated our model on computed tomography (CT) scans from 59 patients through a 4-fold cross-validation. CIS-UNet outperformed the state-of-the-art Swin UNetR segmentation model by achieving a superior mean Dice coefficient of 0.732 compared to 0.717 and a mean surface distance of 2.40 mm compared to 2.75 mm. CIS-UNet's superior 3D aortic segmentation offers improved accuracy and optimization for planning endovascular treatments. Our dataset and code will be made publicly available at https://github.com/mirthAI/CIS-UNet.
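The paper's implementation is promised at the linked repository and is not reproduced here. As a rough, self-contained illustration of the Swin-style patch merging operation that CSW-SA adapts, the NumPy sketch below shows how a 2×2 merge halves spatial resolution while concatenating neighboring features along the channel axis, which is how the feature map is condensed to provide global spatial context. All names here are illustrative, not from the authors' code.

```python
import numpy as np

def patch_merge_2x2(x):
    """Swin-style 2x2 patch merging: halves the spatial resolution and
    concatenates the four neighboring features along the channel axis.

    x: feature map of shape (H, W, C) with even H and W.
    Returns an array of shape (H // 2, W // 2, 4 * C).
    """
    H, W, C = x.shape
    assert H % 2 == 0 and W % 2 == 0, "spatial dims must be even"
    # Gather the four interleaved sub-grids and stack their channels.
    return np.concatenate(
        [x[0::2, 0::2], x[1::2, 0::2], x[0::2, 1::2], x[1::2, 1::2]],
        axis=-1,
    )

fmap = np.random.rand(8, 8, 16)   # toy 2D feature map
merged = patch_merge_2x2(fmap)
print(merged.shape)               # (4, 4, 64)
```

Applied at the bottleneck, this kind of merge shrinks the token grid before self-attention, which is what makes attending over the whole (condensed) map computationally cheap; the paper's 3D version would merge 2×2×2 voxel neighborhoods analogously.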
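The reported evaluation uses the mean Dice coefficient across classes (alongside mean surface distance, which needs distance transforms and is omitted here). A minimal sketch of the multi-class Dice computation, with hypothetical toy volumes standing in for real segmentations:

```python
import numpy as np

def dice_coefficient(pred, gt, label):
    """Dice overlap for one class label in two integer label volumes."""
    p = (pred == label)
    g = (gt == label)
    denom = p.sum() + g.sum()
    if denom == 0:
        return 1.0  # both empty: perfect agreement by convention
    return 2.0 * np.logical_and(p, g).sum() / denom

def mean_dice(pred, gt, labels):
    """Mean Dice over a list of foreground class labels."""
    return float(np.mean([dice_coefficient(pred, gt, l) for l in labels]))

# Toy 3D label volumes with background 0 and two foreground classes.
gt = np.zeros((4, 4, 4), dtype=int)
gt[:2] = 1
gt[2:] = 2
pred = gt.copy()
pred[0, 0, 0] = 2  # one mislabeled voxel
score = mean_dice(pred, gt, labels=[1, 2])
```

In the paper's setting the label list would cover the aorta and its thirteen branches, and per-class scores would be averaged over the 4 cross-validation folds.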

Source journal: Computerized Medical Imaging and Graphics
CiteScore: 10.70
Self-citation rate: 3.50%
Annual publications: 71
Review time: 26 days
Journal description: The purpose of the journal Computerized Medical Imaging and Graphics is to act as a source for the exchange of research results concerning algorithmic advances, development, and application of digital imaging in disease detection, diagnosis, intervention, prevention, precision medicine, and population health. Included in the journal will be articles on novel computerized imaging or visualization techniques, including artificial intelligence and machine learning, augmented reality for surgical planning and guidance, big biomedical data visualization, computer-aided diagnosis, computerized-robotic surgery, image-guided therapy, imaging scanning and reconstruction, mobile and tele-imaging, radiomics, and imaging integration and modeling with other information relevant to digital health. The types of biomedical imaging include: magnetic resonance, computed tomography, ultrasound, nuclear medicine, X-ray, microwave, optical and multi-photon microscopy, video and sensory imaging, and the convergence of biomedical images with other non-imaging datasets.