BDFormer: Boundary-aware dual-decoder transformer for skin lesion segmentation

Artificial Intelligence in Medicine | IF 6.1 | JCR Q1 (Computer Science, Artificial Intelligence) | CAS Tier 2 (Medicine) | Pub Date: 2025-02-15 | DOI: 10.1016/j.artmed.2025.103079
Zexuan Ji, Yuxuan Ye, Xiao Ma
Journal: Artificial Intelligence in Medicine, Volume 162, Article 103079
Full text: https://www.sciencedirect.com/science/article/pii/S0933365725000144
Citations: 0

Abstract

Segmenting skin lesions from dermatoscopic images is crucial for improving the quantitative analysis of skin cancer. However, automatic segmentation of skin lesions remains a challenging task due to the presence of unclear boundaries, artifacts, and obstacles such as hair and veins, all of which complicate the segmentation process. Transformers have demonstrated superior capabilities in capturing long-range dependencies through self-attention mechanisms and are gradually replacing CNNs in this domain. However, one of their primary limitations is the inability to effectively capture local details, which is crucial for handling unclear boundaries and significantly affects segmentation accuracy. To address this issue, we propose a novel boundary-aware dual-decoder transformer that employs a single encoder and dual-decoder framework for both skin lesion segmentation and dilated boundary segmentation. Within this model, we introduce a shifted window cross-attention block to build the dual-decoder structure and apply multi-task distillation to enable efficient interaction of inter-task information. Additionally, we propose a multi-scale aggregation strategy to refine the extracted features, ensuring optimal predictions. To further enhance boundary details, we incorporate a dilated boundary loss function, which expands the single-pixel boundary mask into planar information. We also introduce a task-wise consistency loss to promote consistency across tasks. Our method is evaluated on three datasets: ISIC2018, ISIC2017, and PH2, yielding promising results with excellent performance compared to state-of-the-art models. The code is available at https://github.com/Yuxuan-Ye/BDFormer.
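
To make the boundary-supervision idea concrete, the sketch below shows one way a single-pixel lesion boundary could be expanded into a dilated boundary band and scored with a loss, as the abstract describes. This is a minimal, hypothetical PyTorch sketch, not the authors' implementation (their code is in the linked GitHub repository); the function names, the band width, and the choice of binary cross-entropy are illustrative assumptions only.

```python
# Hypothetical sketch of a dilated boundary target and loss (not the BDFormer code).
import torch
import torch.nn.functional as F


def dilated_boundary_mask(mask: torch.Tensor, width: int = 5) -> torch.Tensor:
    """mask: (B, 1, H, W) binary lesion mask in {0, 1}.
    Returns a band of roughly `width` pixels on either side of the lesion contour."""
    mask = mask.float()
    k = 2 * width + 1
    # Emulate morphological dilation/erosion with max-pooling on the mask
    # and on its complement.
    dilated = F.max_pool2d(mask, kernel_size=k, stride=1, padding=width)
    eroded = 1.0 - F.max_pool2d(1.0 - mask, kernel_size=k, stride=1, padding=width)
    return dilated - eroded  # 1 inside the boundary band, 0 elsewhere


def dilated_boundary_loss(boundary_logits: torch.Tensor,
                          lesion_mask: torch.Tensor,
                          width: int = 5) -> torch.Tensor:
    """Binary cross-entropy between the boundary decoder's logits and the
    dilated boundary band derived from the ground-truth lesion mask."""
    target = dilated_boundary_mask(lesion_mask, width)
    return F.binary_cross_entropy_with_logits(boundary_logits, target)


if __name__ == "__main__":
    gt = torch.zeros(1, 1, 64, 64)
    gt[:, :, 16:48, 16:48] = 1.0        # toy square "lesion"
    logits = torch.randn(1, 1, 64, 64)  # stand-in for the boundary decoder output
    print(dilated_boundary_loss(logits, gt).item())
```

Max-pooling over the mask and its complement emulates dilation and erosion, so the planar boundary target can be built on the GPU without extra dependencies; how BDFormer actually constructs its dilated boundary mask is detailed in the paper and repository.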

Source Journal
Artificial Intelligence in Medicine (CAS category: Engineering Technology, Biomedical Engineering)
CiteScore: 15.00
Self-citation rate: 2.70%
Articles published: 143
Review time: 6.3 months
Journal description: Artificial Intelligence in Medicine publishes original articles from a wide variety of interdisciplinary perspectives concerning the theory and practice of artificial intelligence (AI) in medicine, medically-oriented human biology, and health care. Artificial intelligence in medicine may be characterized as the scientific discipline pertaining to research studies, projects, and applications that aim at supporting decision-based medical tasks through knowledge- and/or data-intensive computer-based solutions that ultimately support and improve the performance of a human care provider.