Interactive defect segmentation in welding radiographic images based on artificial features fusion

NDT & E International, Vol. 151 (2025), Article 103305 · IF 4.5 · CAS Zone 2 (Materials Science) · JCR Q1 (Materials Science, Characterization & Testing) · Pub Date: 2025-04-01 · Epub Date: 2024-12-06 · DOI: 10.1016/j.ndteint.2024.103305 · https://www.sciencedirect.com/science/article/pii/S0963869524002706
Z.H. Yan , B.W. Ji , H. Xu , J. Fang
Citations: 0

Abstract

In recent years, deep learning has been rapidly adopted for defect detection in weld radiographic images. However, several problems must be solved before it can be widely applied in engineering. First, the scarcity of training data limits the prior information available to the model and therefore its performance. Second, manual labeling and discrimination are too time-consuming. In addition, when a deep learning prediction is wrong, it is difficult for a human to intervene and correct it. To address these problems, this work proposes a human-computer interaction method for weld defect detection based on the HRNet + OCR deep learning model. In the dataset preparation stage, unlike previous processing methods, pure background images that contain no instances are removed; the defects in the weld images are then segmented, and separate label maps are produced for the different types of defects and for pseudo-defects. This solves the problem of the network attending too strongly to the semantic content of the image while ignoring the user's interactions during prediction. In the artificial feature extraction stage, guided by human experience, the radiographic image is processed to enhance its non-equilibrium regions, especially those that are small and of weak intensity. These artificial features are then fused into the network so that it can focus on and extract the unbalanced regions of the image more strongly and robustly. The experimental results show that the network performs best on the test data when the artificial feature convolution kernel uses a foreground scale of 3 pixels and background scales of 15 and 31 pixels.
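A multi-scale "artificial feature" of the kind described above can be sketched as a center-surround filter: the mean over a small foreground window minus the means over larger background windows, which makes small, weak non-equilibrium regions stand out. This is a minimal illustrative sketch, not the authors' implementation; the function name and the choice of uniform (box) filtering are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def center_surround_features(img, fg=3, bg_scales=(15, 31)):
    """Foreground-minus-background responses at the scales reported in the
    paper (foreground 3 px, backgrounds 15 px and 31 px). Small, low-contrast
    indications produce large responses; flat regions produce ~0."""
    img = img.astype(np.float32)
    fg_mean = uniform_filter(img, size=fg)          # local foreground mean
    return np.stack([fg_mean - uniform_filter(img, size=s)  # surround means
                     for s in bg_scales])           # one channel per scale
```

The resulting channels could then be concatenated with the raw image as extra network inputs, which is one common way to fuse hand-crafted features with a segmentation backbone.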
With this method, the model achieves 2.30 and 3.67 on NoC@75 and NoC@80, improvements of 68.7 % and 64.3 % respectively over the model without artificial feature fusion.
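NoC@75 and NoC@80 are standard interactive-segmentation metrics: the average number of user clicks needed before the predicted mask first reaches 75 % or 80 % IoU with the ground truth, with a cap charged when the threshold is never reached. The per-image computation can be sketched as follows; the cap of 20 clicks is a common convention in the interactive-segmentation literature and an assumption here, not a value taken from this paper.

```python
def noc(iou_per_click, threshold, max_clicks=20):
    """Number of Clicks: the 1-based index of the first click whose IoU
    reaches `threshold`; `max_clicks` is charged if it is never reached."""
    for k, iou in enumerate(iou_per_click, start=1):
        if iou >= threshold:
            return k
    return max_clicks
```

NoC@75 over a test set is then the mean of `noc(trajectory, 0.75)` across all images, so lower values mean less user effort.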
Source journal
NDT & E International (Engineering Technology, Materials Science: Characterization & Testing)
CiteScore: 7.20
Self-citation rate: 9.50%
Articles per year: 121
Review time: 55 days
Journal introduction: NDT & E International publishes peer-reviewed results of original research and development in all categories of the fields of nondestructive testing and evaluation, including ultrasonics, electromagnetics, radiography, and optical and thermal methods. In addition to traditional NDE topics, the emerging technology area of inspection of civil structures and materials is also emphasized. The journal publishes original papers on research and development of new inspection techniques and methods, as well as on novel and innovative applications of established methods. Papers on NDE sensors and their applications both for inspection and process control, as well as papers describing novel NDE systems for structural health monitoring and their performance in industrial settings, are also considered. Other regular features include international news, new equipment, and a calendar of forthcoming worldwide meetings. This journal is listed in Current Contents.