UGS-M3F: unified gated swin transformer with multi-feature fully fusion for retinal blood vessel segmentation.

Authors: Ibtissam Bakkouri, Siham Bakkouri
Journal: BMC Medical Imaging, vol. 25, no. 1, p. 77
Publication date: 2025-03-06
DOI: 10.1186/s12880-025-01616-1
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11887399/pdf/
Impact Factor: 3.2 | JCR Q2, Radiology, Nuclear Medicine & Medical Imaging | CAS Zone 3 (Medicine)
Citations: 0

Abstract

Automated segmentation of retinal blood vessels in fundus images plays a key role in providing ophthalmologists with critical insights for the non-invasive diagnosis of common eye diseases. Early and precise detection of these conditions is essential for preserving vision, making vessel segmentation crucial for identifying sight-threatening vascular diseases. However, accurately segmenting blood vessels in fundus images is challenging due to factors such as significant variability in vessel scale and appearance, occlusions, complex backgrounds, variations in image quality, and the intricate branching patterns of retinal vessels. To overcome these challenges, the Unified Gated Swin Transformer with Multi-Feature Full Fusion (UGS-M3F) model has been developed as a powerful deep learning framework tailored for retinal vessel segmentation. UGS-M3F leverages its Unified Multi-Context Feature Fusion (UM2F) and Gated Boundary-Aware Swin Transformer (GBS-T) modules to capture contextual information across different levels. The UM2F module enhances the extraction of detailed vessel features, while the GBS-T module emphasizes small vessel detection and ensures extensive coverage of large vessels. Extensive experimental results on publicly available datasets, including FIVES, DRIVE, STARE, and CHAS_DB1, show that UGS-M3F significantly outperforms existing state-of-the-art methods. Specifically, UGS-M3F achieves a Dice Coefficient (DC) improvement of 2.12% on FIVES, 1.94% on DRIVE, 2.52% on STARE, and 2.14% on CHAS_DB1 compared to the best-performing baseline. This improvement in segmentation accuracy has the potential to revolutionize diagnostic techniques, allowing for more precise disease identification and management across a range of ocular conditions.
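
For readers unfamiliar with the two ideas the abstract leans on, the sketch below illustrates (a) gated fusion of multi-scale feature maps, the general mechanism behind context-fusion modules such as UM2F and GBS-T, and (b) the Dice Coefficient (DC) used to report the gains. This is a minimal PyTorch-style sketch under assumed shapes and layer choices; the class GatedMultiScaleFusion and all of its parameters are illustrative and are not the authors' implementation.

# Minimal sketch of gated multi-scale feature fusion and the Dice Coefficient.
# Module names, channel counts, and layer choices are assumptions for illustration,
# NOT the UM2F/GBS-T code from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedMultiScaleFusion(nn.Module):
    """Fuses a fine (high-resolution) and a coarse (low-resolution) feature map
    with a learned per-pixel gate, illustrating gated context fusion."""
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolution that predicts a per-pixel gate from both feature maps
        self.gate = nn.Conv2d(2 * channels, 1, kernel_size=1)
        # 3x3 convolution that refines the fused features
        self.fuse = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, fine: torch.Tensor, coarse: torch.Tensor) -> torch.Tensor:
        # Upsample the coarse map to the fine map's spatial size
        coarse = F.interpolate(coarse, size=fine.shape[-2:],
                               mode="bilinear", align_corners=False)
        # Gate in [0, 1] decides, per pixel, how much coarse context to mix in
        g = torch.sigmoid(self.gate(torch.cat([fine, coarse], dim=1)))
        return self.fuse(g * coarse + (1.0 - g) * fine)

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> float:
    """Dice Coefficient: DC = 2|P ∩ T| / (|P| + |T|) for binary vessel masks."""
    pred, target = pred.float(), target.float()
    intersection = (pred * target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Example: fuse two random feature maps and score a dummy segmentation mask
fusion = GatedMultiScaleFusion(channels=32)
fused = fusion(torch.randn(1, 32, 64, 64), torch.randn(1, 32, 32, 32))
print(fused.shape)                                   # torch.Size([1, 32, 64, 64])
print(dice_coefficient(torch.ones(1, 1, 64, 64),
                       torch.ones(1, 1, 64, 64)))    # 1.0

The learned gate here stands in for the general idea of deciding, location by location, how much coarse context to inject into fine vessel features; the actual UGS-M3F modules are more elaborate and are described in the full paper.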

Source journal: BMC Medical Imaging (Radiology, Nuclear Medicine & Medical Imaging)
CiteScore: 4.60
Self-citation rate: 3.70%
Articles per year: 198
Review time: 27 weeks
Journal description: BMC Medical Imaging is an open access journal publishing original peer-reviewed research articles in the development, evaluation, and use of imaging techniques and image processing tools to diagnose and manage disease.