A pyramid auxiliary supervised U-Net model for road crack detection with dual-attention mechanism

Displays · IF 3.7 · JCR Q1 (Computer Science, Hardware & Architecture) · Published: 2024-06-27 · DOI: 10.1016/j.displa.2024.102787
Yingxiang Lu, Guangyuan Zhang, Shukai Duan, Feng Chen
Displays, Volume 84, Article 102787. Full text: https://www.sciencedirect.com/science/article/pii/S0141938224001513
Citations: 0

Abstract

The application of road crack detection technology plays a pivotal role in transportation infrastructure management. However, the diversity of crack morphologies within images and the complexity of background noise still pose significant challenges to automated detection. This requires deep learning models to have more precise feature extraction capabilities and stronger resistance to noise interference. In this paper, we propose a pyramid auxiliary supervised U-Net model with a dual-attention mechanism. The pyramid auxiliary supervision module is integrated into the U-Net model, alleviating information loss at the encoder end caused by pooling operations and thereby enhancing the model's global perception capability. In addition, within the dual-attention module, our model learns crucial segmentation features at both the pixel and channel levels. These components enable our model to resist noise interference and achieve higher precision in crack pixel segmentation. To substantiate the superiority and generalizability of our model, we conducted a comprehensive performance evaluation on public datasets. The experimental results indicate that our model surpasses current state-of-the-art methods. Additionally, we performed ablation studies to confirm the efficacy of the proposed modules.
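The abstract names two mechanisms without giving their formulas: attention applied at the channel and pixel levels, and auxiliary supervision applied across decoder scales. The following NumPy sketch is purely illustrative of those two generic ideas (the function names, gating choices, and loss weights are assumptions, not the authors' implementation): channel attention gates each feature channel by a pooled statistic, pixel attention gates each spatial location, and the pyramid loss sums a binary cross-entropy over predictions at several resolutions against a correspondingly downsampled ground-truth mask.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    """Gate each channel of a (C, H, W) feature map by its global average."""
    gate = sigmoid(feat.mean(axis=(1, 2)))       # (C,)
    return feat * gate[:, None, None]

def pixel_attention(feat):
    """Gate each spatial position by the channel-wise mean at that pixel."""
    gate = sigmoid(feat.mean(axis=0))            # (H, W)
    return feat * gate[None, :, :]

def dual_attention(feat):
    """Fuse both branches by summation (one common fusion choice)."""
    return channel_attention(feat) + pixel_attention(feat)

def pyramid_aux_loss(pred_maps, mask, weights=(1.0, 0.5, 0.25)):
    """Weighted binary cross-entropy over multiple prediction scales.
    The ground-truth mask is downsampled by striding to match each
    auxiliary prediction's resolution."""
    eps = 1e-7
    total = 0.0
    for pred, w in zip(pred_maps, weights):
        stride = mask.shape[0] // pred.shape[0]
        m = mask[::stride, ::stride]
        p = np.clip(pred, eps, 1.0 - eps)
        total += w * -(m * np.log(p) + (1.0 - m) * np.log(1.0 - p)).mean()
    return total
```

In a real training loop, the gates would be learned projections rather than fixed pooled statistics, but the shapes and broadcasting mirror what such modules compute.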

Source Journal

Displays (Engineering: Electrical & Electronic)
CiteScore: 4.60
Self-citation rate: 25.60%
Articles per year: 138
Review time: 92 days
Journal description: Displays is the international journal covering the research and development of display technology, its effective presentation and perception of information, and applications and systems including the display-human interface. Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the Displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technology and human factors engineers new to the field, will also occasionally be featured.