Vision-Based Nut Quality Classification Using Conditional GAN and CNN

IEEE Transactions on Automation Science and Engineering · Impact Factor: 6.4 · CAS Tier 2 (Computer Science) · JCR Q1 (Automation & Control Systems) · Published: 2025-01-23 · DOI: 10.1109/TASE.2025.3533013
Kuei-Jung Hung;Tzu-Chen Lee;Chiao-Sheng Wang;Tsung-Chun Lin;Chen-Wei Conan Guo;Der-Min Tsay;Jau-Woei Perng
Volume 22, pp. 11455-11468 · Citations: 0

Abstract

In this study, nut quality is assessed from images of the internal thread, and the analysis is conducted using both traditional machine-learning and deep-learning algorithms. Compared with traditional contact methods, the vision-based method offers fast computation and is unaffected by tapping-speed conditions. The pitch and pitch diameter of the internal thread are the indicators that characterize nut quality. For each nut, 36 internal thread images are collected, one per 10 degrees, by a self-designed laser triangulation measurement platform; the laser triangulation method yields the information on both indicators. In the traditional machine-learning methods, the internal thread images undergo several preprocessing steps to obtain the region of interest and to compute the depth between the crest and the root. Subsequently, 33 handcrafted features are extracted from the 36 processed images and classified by three families of machine-learning algorithms: support vector machines, k-nearest neighbors, and decision trees. In the deep-learning method, a conditional generative adversarial network (CGAN) and a convolutional neural network (CNN) are used for data augmentation and nut quality classification, respectively. The experimental results show that the proposed CNN model achieves a higher classification accuracy. Furthermore, the proposed CNN model trained with the generated images is better equipped to detect nut quality under different decision thresholds.

Note to Practitioners: This study explores the assessment of nut quality, focusing on the internal thread. Traditional methods for grading thread quality often involve contact-based detection, which risks damaging the thread surface, and most non-contact methods suffer from prolonged classification times.
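The classical branch described in the abstract (33 handcrafted features per nut, classified by three algorithm families) could be sketched as below. This is a minimal illustration with synthetic data: the feature values, labels, sample count, and hyperparameters are assumptions for demonstration, not the paper's actual settings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: 200 nuts, each with 33 handcrafted features
# (in the paper these come from the 36 preprocessed thread images per nut).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 33))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 0 = defective, 1 = good (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# The three classifier families named in the abstract.
models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "DT": DecisionTreeClassifier(max_depth=5, random_state=0),
}
scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    scores[name] = model.score(X_te, y_te)
print(scores)
```

Scaling matters for the distance- and margin-based models (kNN, SVM), hence the `StandardScaler` pipelines; the decision tree is scale-invariant and is fit on the raw features.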
This study introduces a vision-based nut-quality classification method based on the Japanese Industrial Standard (JIS) specification, offering rapid computation and independence from tapping-speed conditions. Internal thread images are collected using a self-designed laser triangulation measurement platform, and several learning algorithms are employed for analysis. For the traditional machine-learning approaches, we propose an image preprocessing method to extract 33 statistical features from the internal thread images. Feature-importance analysis, computed with random forest (RF) and gradient boosting (GB), highlights features such as the crest-to-root depth variation between teeth and the brightness of the crest. With feature reduction, some machine-learning algorithms improve both the classification accuracy and the AUC of the model. On the deep-learning side, we use a conditional generative adversarial network (CGAN) to generate internal thread images and a CNN for nut quality classification. Experimental results demonstrate that the proposed CNN model achieves higher classification accuracy with fewer training parameters than VGG16, VGG19, ResNet50, and Xception. The deep-learning models, particularly the CNN, outperform the traditional machine-learning methods without requiring manual feature extraction. The CGAN successfully augments the training dataset, and the models can detect nut quality under different decision thresholds. Although the proposed method achieves non-contact detection, it has not yet attained full automation. Future research will integrate automated mechanisms for placing nuts onto the inspection platform and conduct real-time image analysis for continuous assessment of nut quality.
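The feature-importance and feature-reduction step mentioned above can be sketched with scikit-learn's RF and GB estimators. Again the data here is a synthetic stand-in for the 33-feature table, and averaging the two importance vectors and keeping the top 10 features are illustrative choices, not the paper's procedure.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier

# Synthetic stand-in: in the paper, the most important features included
# crest-to-root depth variation between teeth and crest brightness.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 33))
y = (X[:, 0] + 0.5 * X[:, 7] + rng.normal(scale=0.3, size=300) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
gb = GradientBoostingClassifier(random_state=0).fit(X, y)

# Average the two impurity-based importance vectors and rank the features.
importance = (rf.feature_importances_ + gb.feature_importances_) / 2
top10 = np.argsort(importance)[::-1][:10]

# Feature reduction: retrain downstream classifiers on the top-10 columns only.
X_reduced = X[:, top10]
print("top features:", top10.tolist())
```

Each estimator's `feature_importances_` vector is normalized to sum to 1, so the averaged vector is directly comparable across the two models; in practice one would verify the reduced feature set against accuracy and AUC, as the paper does.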
Source Journal

IEEE Transactions on Automation Science and Engineering (Engineering & Technology: Automation & Control Systems)
CiteScore: 12.50 · Self-citation rate: 14.30% · Annual articles: 404 · Review time: 3.0 months
About the Journal: The IEEE Transactions on Automation Science and Engineering (T-ASE) publishes fundamental papers on Automation, emphasizing scientific results that advance efficiency, quality, productivity, and reliability. T-ASE encourages interdisciplinary approaches from computer science, control systems, electrical engineering, mathematics, mechanical engineering, operations research, and other fields. T-ASE welcomes results relevant to industries such as agriculture, biotechnology, healthcare, home automation, maintenance, manufacturing, pharmaceuticals, retail, security, service, supply chains, and transportation. T-ASE addresses a research community willing to integrate knowledge across disciplines and industries. For this purpose, each paper includes a Note to Practitioners that summarizes how its results can be applied or how they might be extended to apply in practice.