ONLINE DETECTION SYSTEM FOR CRUSHED RATE AND IMPURITY RATE OF MECHANIZED SOYBEAN BASED ON DEEPLABV3+

INMATEH-Agricultural Engineering (IF 0.6, Q4, Agricultural Engineering) · Published 2023-08-17 · DOI: 10.35633/inmateh-70-48
Man Chen, Gong Cheng, Jinshan Xu, Guangyue Zhang, Chengqian Jin

Abstract

In this study, an online detection system for the crushed rate and impurity rate of mechanized soybean harvesting was constructed based on the DeepLabV3+ model. Three feature extraction networks, MobileNetV2, Xception-65, and ResNet-50, were evaluated to select the best DeepLabV3+ configuration. Two well-established semantic segmentation networks, an improved U-Net and PSPNet, were also applied to recognize and segment images of mechanically harvested soybeans, and their performance was compared with that of the DeepLabV3+ model. The results show that, of all the models, the improved U-Net has the best segmentation performance, achieving a mean intersection over union (FMIOU) of 0.8326. The DeepLabV3+ model with the MobileNetV2 backbone performs comparably, with an FMIOU of 0.8180, while segmenting quickly at 168.6 ms per image. Taking manual detection results as the benchmark, the detection system based on the DeepLabV3+ model with MobileNetV2 shows maximum absolute and relative errors of 0.06% and 8.11% for the impurity rate, and of 0.34% and 9.53% for the crushed rate, respectively.
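The abstract reports segmentation quality as mean intersection over union (FMIOU) and derives the impurity and crushed rates from segmented images, but does not give the formulas. The following is a minimal sketch of how such quantities are commonly computed from per-pixel class-label masks; the class indices and the pixel-area-ratio definition of the rates are assumptions, not the paper's stated method.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection over union across classes.

    pred, target: integer class-label arrays of identical shape.
    Classes absent from both masks are skipped so they do not
    distort the average.
    """
    ious = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        union = np.logical_or(p, t).sum()
        if union == 0:  # class absent from both prediction and ground truth
            continue
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

def area_rate(mask, part_class, whole_classes):
    """Pixel-area ratio of one component class to the total grain-sample
    area, a plausible proxy for an impurity or crushed rate."""
    part = np.isin(mask, [part_class]).sum()
    whole = np.isin(mask, whole_classes).sum()
    return part / whole if whole else 0.0
```

For example, with classes 0 = background, 1 = intact soybean, and 2 = impurity (hypothetical labels), `area_rate(mask, 2, [1, 2])` would give the impurity fraction of the segmented sample area.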
Source journal

INMATEH-Agricultural Engineering (Agricultural Engineering) · CiteScore: 1.30 · Self-citation rate: 57.10% · Articles per year: 98