Deep convolutional networks based on lightweight YOLOv8 to detect and estimate peanut losses from images in post-harvesting environments

Computers and Electronics in Agriculture · IF 8.9 · JCR Q1 (Agriculture, Multidisciplinary) · CAS Tier 1 (Agricultural & Forestry Sciences) · Volume 234, Article 110282 · Pub Date: 2025-07-01 (Epub: 2025-03-19) · DOI: 10.1016/j.compag.2025.110282
Armando Lopes de Brito Filho , Franciele Morlin Carneiro , Vinicius dos Santos Carreira , Danilo Tedesco , Jarlyson Brunno Costa Souza , Marcelo Rodrigues Barbosa Júnior , Rouverson Pereira da Silva
Abstract

Peanut loss detection is key to monitoring operational quality during mechanical harvesting. Current manual assessments face practical limitations in the field, as they tend to be exhausting, time-consuming, and susceptible to errors, especially after long work periods. Therefore, the main objective of this study was to develop an automated image processing framework to detect, count, and estimate peanut pod losses during the harvesting operation. We proposed a robust approach encompassing different environmental conditions and training detection algorithms, specifically based on the lightweight YOLOv8 architecture, with images acquired with a smartphone at six different times of the day (10 a.m., 11 a.m., 1 p.m., 2 p.m., 3 p.m., and 4 p.m.). The experimental results showed that detecting two-seed peanut pods was more effective than detecting one-seed pods, with higher precision, recall, and mAP50 values. The best results for image acquisition were obtained between 10 a.m. and 2 p.m. The study also compared manual and automated counting methods, revealing that the best counting scenarios achieved an R² above 0.80. Furthermore, georeferenced maps of peanut losses revealed significant spatial variability, providing critical insights for targeted interventions. These findings demonstrate the potential to enhance mechanized harvesting efficiency and lay the groundwork for future integration into fully automated systems. By incorporating this method into harvesting machinery, real-time monitoring and accurate loss quantification can be achieved, substantially reducing the need for labor-intensive manual assessments.
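The count-and-estimate step described above can be sketched in a few lines. This is a minimal illustration only: the frame area, per-pod mass, and the sample counts below are hypothetical calibration values invented for the example, not figures reported in the paper, and the detector itself (YOLOv8 inference) is assumed to have already produced a per-image pod count.

```python
# Hedged sketch: turning per-image pod counts into a loss estimate (kg/ha)
# and comparing automated vs. manual counts with the coefficient of
# determination (R^2). All numeric values are illustrative assumptions.

def r_squared(observed, predicted):
    """R^2 between manual (observed) and automated (predicted) counts."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

def loss_kg_per_ha(pod_count, frame_area_m2=0.25, pod_mass_g=1.5):
    """Scale pods counted inside one sampling frame to kg/ha.
    frame_area_m2 and pod_mass_g are hypothetical calibration values."""
    pods_per_ha = pod_count / frame_area_m2 * 10_000   # 1 ha = 10,000 m^2
    return pods_per_ha * pod_mass_g / 1000.0           # grams -> kilograms

manual = [12, 18, 7, 22, 15]       # pods counted by hand per frame
automated = [11, 19, 8, 21, 14]    # pods counted by the detector

r2 = r_squared(manual, automated)
print(f"R2 = {r2:.3f}")
print(f"loss estimate, frame 1: {loss_kg_per_ha(manual[0]):.1f} kg/ha")
```

Attaching a GPS fix to each frame's estimate would then yield the kind of georeferenced loss map the study describes.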
Source journal: Computers and Electronics in Agriculture (Engineering/Technology – Computer Science, Interdisciplinary Applications)
CiteScore: 15.30
Self-citation rate: 14.50%
Articles per year: 800
Review time: 62 days
About the journal: Computers and Electronics in Agriculture provides international coverage of advancements in computer hardware, software, electronic instrumentation, and control systems applied to agricultural challenges. Encompassing agronomy, horticulture, forestry, aquaculture, and animal farming, the journal publishes original papers, reviews, and application notes. It explores the use of computers and electronics in plant or animal agricultural production, covering topics like agricultural soils, water, pests, controlled environments, and waste. The scope extends to on-farm post-harvest operations and relevant technologies, including artificial intelligence, sensors, machine vision, robotics, networking, and simulation modeling. Its companion journal, Smart Agricultural Technology, continues the focus on smart applications in production agriculture.