A Comparison of CNN-based Image Feature Extractors for Weld Defects Classification

Tito Wahyu Purnomo, Harun Al Rasyid Ramadhany, Hapsara Hadi Carita Jati, Djati Handoko
Indonesian Journal of Applied Physics, published 2024-05-02. DOI: 10.13057/ijap.v14i1.72509

Abstract

Classification of the types of weld defects is one of the stages of evaluating radiographic images, an essential step in controlling the quality of welded joints in materials. By automating weld-defect classification with deep learning and CNN architectures, it is possible to overcome the limitations of visual or manual evaluation of radiographic images. Good accuracy in weld-defect classification models requires sufficient data; in reality, however, the publicly accessible radiographic image datasets are limited and imbalanced between classes. Consequently, simple image cropping and augmentation techniques are applied during the data preparation stage. To construct a weld-defect classification model, we propose a transfer-learning approach that employs pre-trained CNN architectures as feature extractors (DenseNet201, InceptionV3, MobileNetV2, NASNetMobile, ResNet50V2, VGG16, VGG19, and Xception), each linked to a simple multilayer-perceptron classifier. The test results indicate that the three best classification models were obtained using the DenseNet201 feature extractor, with a test accuracy of 100%, followed by ResNet50V2 and InceptionV3 with an accuracy of 99.17%. These results surpass state-of-the-art classification models handling up to six defect classes. The research findings may assist radiography experts in evaluating radiographic images more accurately and efficiently.
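The abstract mentions simple image cropping and augmentation to compensate for the limited, imbalanced radiographic dataset. The paper's exact preprocessing is not specified here, so the following is only an illustrative sketch of random crop-and-flip augmentation on a grayscale image represented as a 2-D list; the function names (`crop`, `hflip`, `augment`) and parameters are hypothetical, not taken from the paper:

```python
import random

def crop(img, top, left, height, width):
    """Extract a height x width patch from a 2-D grayscale image (list of rows)."""
    return [row[left:left + width] for row in img[top:top + height]]

def hflip(img):
    """Mirror the image horizontally (weld defects are typically orientation-agnostic)."""
    return [row[::-1] for row in img]

def augment(img, patch_h, patch_w, rng=random):
    """Random crop followed by a 50% chance of a horizontal flip."""
    top = rng.randrange(len(img) - patch_h + 1)
    left = rng.randrange(len(img[0]) - patch_w + 1)
    patch = crop(img, top, left, patch_h, patch_w)
    return hflip(patch) if rng.random() < 0.5 else patch

# Example: a 4x4 toy "radiograph" reduced to 2x2 patches.
image = [[ 0,  1,  2,  3],
         [ 4,  5,  6,  7],
         [ 8,  9, 10, 11],
         [12, 13, 14, 15]]
patch = augment(image, 2, 2)
```

Because each call picks a different window and flip, repeated calls on one source image yield many distinct training patches, which is how cropping-based augmentation inflates a small, imbalanced dataset.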