Alfalfa detection and stem count from proximal images using a combination of deep neural networks and machine learning

IF 8.9 · JCR Q1 (Agriculture, Multidisciplinary) · CAS Tier 1 (Agricultural Sciences) · Computers and Electronics in Agriculture · Pub Date: 2025-05-01 (Epub: 2025-02-16) · DOI: 10.1016/j.compag.2025.110115
Hazhir Bahrami , Karem Chokmani , Saeid Homayouni , Viacheslav I. Adamchuk , Md Saifuzzaman , Maxime Leduc
Computers and Electronics in Agriculture, Volume 232, Article 110115. Full text: https://www.sciencedirect.com/science/article/pii/S0168169925002212
Citations: 0

Abstract

Alfalfa (Medicago sativa) is a crucial forage crop that plays a vital role in livestock nutrition and sustainable agriculture. Owing to its adaptability to varied weather conditions and its high nitrogen-fixation capability, the crop produces high-quality forage containing 15–22 % protein. Remote sensing technologies make it possible to improve the prediction of forage biomass and quality before harvest. The recent advent of deep convolutional neural networks (CNNs) has given researchers powerful tools for this kind of image analysis. This study builds a model to count alfalfa stems from proximal images. To this end, we first used a deep CNN encoder-decoder to segment alfalfa from other objects in a field, such as soil and grass. We then used the alfalfa cover fractions derived from the proximal images to develop and train machine learning regression models that estimate the stem count in the images. The study uses a large set of proximal images taken from many fields in four Canadian provinces over three consecutive years. A combination of real and synthetic images was used to train the deep encoder-decoder. Roughly 3447 alfalfa images, 5332 grass images, and 9241 background images were gathered for training; with data augmentation and a pre-trained model, about 60,000 annotated images of alfalfa fields (alfalfa, grass, and background) were prepared in less than an hour. Several CNN encoder-decoder architectures were evaluated: simple U-Net, Attention U-Net (Att U-Net), and ResU-Net with attention gates were trained to detect alfalfa and differentiate it from other objects. The best per-class Intersection over Union (IoU) scores for simple U-Net were 0.98, 0.93, and 0.80 for background, alfalfa, and grass, respectively.
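The per-class IoU values reported above can be illustrated with a minimal sketch. This is not the authors' code; the function and the tiny 3×3 label maps below are hypothetical, using the same three classes (background = 0, alfalfa = 1, grass = 2) as the paper:

```python
import numpy as np

def per_class_iou(pred, target, num_classes):
    """Per-class Intersection over Union for integer label maps.

    pred, target: 2-D arrays of class indices.
    Returns one IoU per class (NaN if the class is absent from both masks).
    """
    ious = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        union = np.logical_or(p, t).sum()
        inter = np.logical_and(p, t).sum()
        ious.append(float("nan") if union == 0 else inter / union)
    return ious

# Tiny illustrative masks: class 2 (grass) is absent from both.
pred   = np.array([[0, 0, 1], [0, 1, 1], [0, 1, 1]])
target = np.array([[0, 0, 0], [0, 1, 1], [0, 1, 1]])
print(per_class_iou(pred, target, num_classes=3))  # → [0.8, 0.8, nan]
```

Averaging the non-NaN entries gives the mean IoU commonly reported alongside per-class scores.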
Simple U-Net trained with synthetic data generalizes well to unseen real images and requires only an RGB iPad image for field-specific alfalfa detection. Simple U-Net was also slightly more accurate than Attention U-Net and attention ResU-Net. Finally, we built regression models relating the alfalfa cover fraction in the original iPad images to the mean number of alfalfa stems per square foot. Random Forest (RF), Support Vector Regression (SVR), and Extreme Gradient Boosting (XGB) were used to estimate the number of stems in the images. RF was the best of these models, with a coefficient of determination (R²) of 0.82, a root-mean-square error of 13.00, and a mean absolute error of 10.07.
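The second stage (cover fraction → stem count, scored by R², RMSE, and MAE) can be sketched as follows. This is a stand-in, not the paper's pipeline: it uses synthetic cover fractions and stem counts and an ordinary least-squares line in place of the paper's RF/SVR/XGB models, purely to show how the three reported metrics are computed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: alfalfa cover fraction (0-1) per image and a
# stem count per square foot that loosely increases with cover.
cover = rng.uniform(0.1, 0.9, size=200)
stems = 120 * cover + rng.normal(0, 8, size=200)

# Hold out the last 50 samples, fit a 1-D least-squares line
# (a simple stand-in for the paper's Random Forest regressor).
x_train, x_test = cover[:150], cover[150:]
y_train, y_test = stems[:150], stems[150:]
slope, intercept = np.polyfit(x_train, y_train, 1)
pred = slope * x_test + intercept

# The three metrics reported in the abstract.
r2 = 1 - np.sum((y_test - pred) ** 2) / np.sum((y_test - y_test.mean()) ** 2)
rmse = np.sqrt(np.mean((y_test - pred) ** 2))
mae = np.mean(np.abs(y_test - pred))
print(f"R2={r2:.2f}  RMSE={rmse:.2f}  MAE={mae:.2f}")
```

Swapping the line fit for `sklearn.ensemble.RandomForestRegressor` would mirror the paper's best-performing choice; the metric definitions stay the same.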
Source journal: Computers and Electronics in Agriculture (Engineering & Technology – Computer Science: Interdisciplinary Applications)
CiteScore: 15.30
Self-citation rate: 14.50%
Articles per year: 800
Review time: 62 days
Journal overview: Computers and Electronics in Agriculture provides international coverage of advancements in computer hardware, software, electronic instrumentation, and control systems applied to agricultural challenges. Encompassing agronomy, horticulture, forestry, aquaculture, and animal farming, the journal publishes original papers, reviews, and application notes. It explores the use of computers and electronics in plant or animal agricultural production, covering topics such as agricultural soils, water, pests, controlled environments, and waste. The scope extends to on-farm post-harvest operations and relevant technologies, including artificial intelligence, sensors, machine vision, robotics, networking, and simulation modeling. Its companion journal, Smart Agricultural Technology, continues the focus on smart applications in production agriculture.