Crack Detection on Concrete Surfaces Using Deep Encoder-Decoder Convolutional Neural Network: A Comparison Study Between U-Net and DeepLabV3+

Patrick Nicholas Hadinata, Djoni Simanta, L. Eddy, K. Nagai
{"title":"Crack Detection on Concrete Surfaces Using Deep Encoder-Decoder Convolutional Neural Network: A Comparison Study Between U-Net and DeepLabV3+","authors":"Patrick Nicholas Hadinata, Djoni Simanta, L. Eddy, K. Nagai","doi":"10.22146/jcef.65288","DOIUrl":null,"url":null,"abstract":"Maintenance of infrastructures is a crucial activity to ensure safety using crack detection methods on concrete structures. However, most practice of crack detection is carried out manually, which is unsafe, highly subjective, and time-consuming. Therefore, a more accurate and efficient system needs to be implemented using artificial intelligence. Convolutional neural network (CNN), a subset of artificial intelligence, is used to detect cracks on concrete surfaces through semantic image segmentation. The purpose of this research is to compare the effectiveness of cutting-edge encoder-decoder architectures in detecting cracks on concrete surfaces using U-Net and DeepLabV3+ architectures with potential in biomedical, and sparse multiscale image segmentations, respectively. Neural networks were trained using cloud computing with a high-performance Graphics Processing Unit NVIDIA Tesla V100 and 27.4 GB of RAM. This study used internal and external data. Internal data consisted of simple cracks and were used as the training and validation data. Meanwhile, external data consisted of more complex cracks, which were used for further testing. Both architectures were compared based on four evaluation metrics in terms of accuracy, F1, precision, and recall. U-Net achieved segmentation accuracy = 96.57%, F1 = 87.55%, precision = 88.15%, and recall = 88.94%, while DeepLabV3+ achieved segmentation accuracy = 96.47%, F1 = 85.29%, precision = 92.07%, and recall = 81.84%. Experiment results (internal and external data) indicated that both architectures were accurate and effective in segmenting cracks. Additionally, U-Net and DeepLabV3+ exceeded the performance of previously tested architecture, namely FCN.","PeriodicalId":31890,"journal":{"name":"Journal of the Civil Engineering Forum","volume":"44 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2021-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of the Civil Engineering Forum","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.22146/jcef.65288","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

Maintenance of infrastructure is a crucial activity for ensuring safety, and crack detection on concrete structures is a central part of it. However, crack detection is mostly carried out manually, which is unsafe, highly subjective, and time-consuming. Therefore, a more accurate and efficient system needs to be implemented using artificial intelligence. A convolutional neural network (CNN), a subset of artificial intelligence, is used here to detect cracks on concrete surfaces through semantic image segmentation. The purpose of this research is to compare the effectiveness of two cutting-edge encoder-decoder architectures, U-Net and DeepLabV3+, in detecting cracks on concrete surfaces; the former has shown its potential in biomedical image segmentation and the latter in sparse, multiscale image segmentation. The neural networks were trained using cloud computing with a high-performance NVIDIA Tesla V100 Graphics Processing Unit and 27.4 GB of RAM. This study used internal and external data: the internal data consisted of simple cracks and served as the training and validation data, while the external data consisted of more complex cracks and were used for further testing. The two architectures were compared using four evaluation metrics: accuracy, F1-score, precision, and recall. U-Net achieved a segmentation accuracy of 96.57%, F1 of 87.55%, precision of 88.15%, and recall of 88.94%, while DeepLabV3+ achieved a segmentation accuracy of 96.47%, F1 of 85.29%, precision of 92.07%, and recall of 81.84%. The experimental results on both internal and external data indicated that both architectures segment cracks accurately and effectively. Additionally, U-Net and DeepLabV3+ exceeded the performance of a previously tested architecture, the fully convolutional network (FCN).
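The abstract does not include implementation details, but the four reported metrics are standard pixel-level measures for binary segmentation. The sketch below is a minimal NumPy illustration (the function name, the random stand-in masks, and the 0.5 binarisation threshold are assumptions, not taken from the paper) of how accuracy, precision, recall, and F1 are typically computed from a predicted crack mask and its ground-truth mask.

```python
import numpy as np

def segmentation_metrics(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-7):
    """Pixel-level accuracy, precision, recall, and F1 for binary masks.

    Both masks are boolean/0-1 arrays of the same shape, where 1 marks
    a crack pixel and 0 marks background.
    """
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)

    tp = np.logical_and(pred, true).sum()      # crack pixels correctly detected
    fp = np.logical_and(pred, ~true).sum()     # background predicted as crack
    fn = np.logical_and(~pred, true).sum()     # crack pixels missed
    tn = np.logical_and(~pred, ~true).sum()    # background correctly ignored

    accuracy = (tp + tn) / (tp + tn + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return accuracy, precision, recall, f1


if __name__ == "__main__":
    # Toy example: evaluate a thresholded network output against a labelled mask.
    rng = np.random.default_rng(0)
    prob_map = rng.random((256, 256))          # stand-in for a network's sigmoid output
    true_mask = rng.random((256, 256)) > 0.9   # stand-in for a ground-truth crack mask
    pred_mask = prob_map > 0.5                 # binarise predictions at 0.5
    acc, prec, rec, f1 = segmentation_metrics(pred_mask, true_mask)
    print(f"accuracy={acc:.4f} precision={prec:.4f} recall={rec:.4f} F1={f1:.4f}")
```

Under these definitions, DeepLabV3+'s higher precision but lower recall yields a lower F1 than U-Net's more balanced precision and recall, which is consistent with the scores reported in the abstract.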