Pub Date: 2026-01-03 | DOI: 10.1007/s11119-025-10313-6
Early prediction of coffee production per plant using morphological indices
Gabriel Dumbá Monteiro de Castro, Daniel Marçal de Queiroz, Domingos Sárvio Magalhães Valente, Diego Bedin Marin, Ryan Moreira Borges
Pub Date: 2026-01-03 | DOI: 10.1007/s11119-025-10312-7
Metrics of soil degradation by recent filling of permanent gullies: a study case on annual rainfed crops at the Campiña landscape (Spain)
Carlos Castillo, Encarnación V. Taguas, Miguel Vallejo, Rafael Pérez, Robert R. Wells, Ronald L. Bingner, Helena Gómez-MacPherson
Pub Date: 2025-12-27 | DOI: 10.1007/s11119-025-10308-3
Precision agriculture technologies adoption and technical efficiency of soybean farms in São Paulo, Brazil
Larissa Gui Pagliuca, Marcelo José Carrer, Rodrigo Damasceno, Marcela de Mello Brandão Vinholis, Hildo Meirelles de Souza Filho
Pub Date: 2025-12-27 | DOI: 10.1007/s11119-025-10311-8
Performance of interpolation methods in digital soil mapping: the influence of data characteristics
Laura Delgado Bejarano, Agda Loureiro Gonçalves Oliveira, João Vitor Fiolo Pozzuto, Dario Castañeda Sánchez, Lucas Rios do Amaral
Pub Date: 2025-12-27 | DOI: 10.1007/s11119-025-10309-2
Multispecies weed mapping using deep learning on UAV imagery for SSWM in maize and tomato
G. A. Mesías-Ruiz, I. Borra-Serrano, J. Dorado, A. I. de Castro, J. M. Peña
Purpose: Accurate identification and mapping of multiple weed species at early growth stages is a critical step toward operational site-specific weed management (SSWM), yet most UAV-based studies have so far been limited to broad weed categories or a single dominant species. This study evaluated and compared deep learning models for multispecies weed classification, detection and mapping in maize and tomato fields using UAV-based RGB imagery.
Methods: Three convolutional neural networks (Inception-ResNet-v2, EfficientNet-B0, YOLOv8) and two Vision Transformers (ViT-Base, Swin-T) were assessed for the classification of nine common weed species. The two best-performing classifiers were then implemented in object detection frameworks (YOLOv8m and DETA), and species-specific treatment maps were generated by applying adaptive economic weed thresholds to gridded weed-density data.
Results: Swin-T and YOLOv8 achieved the highest classification metrics, with weighted F1-scores of 98.1% and 97.0%, respectively. For object detection, YOLOv8m outperformed DETA, reaching a mean Average Precision of 0.93 and a recall of 0.94 while substantially reducing inference time. The multispecies treatment maps showed that over 70% of the field area was weed-free, indicating potential cost savings compared with uniform full-field treatments.
Conclusions: The proposed workflow enabled accurate multispecies weed classification, detection and mapping at early growth stages, providing valuable inputs for decision support systems and smart sprayers and gradually advancing SSWM toward more selective, efficient and sustainable weed control.
Pub Date: 2025-12-22 | DOI: 10.1007/s11119-025-10301-w
Spatio-temporal prediction of total and legume dry matter yield using UAV-borne RGB and multispectral images in alfalfa-grass mixtures
Leon Weigelt, Matthias Wengert, Michael Wachendorf, Jayan Wijesingha
Accurate and timely forage yield prediction in alfalfa-grass mixtures (AGM) is essential for supporting precision agriculture management decisions. This study aimed to develop and evaluate UAV-borne remote sensing models to predict total dry matter yield (DMY) and legume dry matter yield (LY) across multiple harvests and field sites. UAV-borne high-resolution true-colour images were used to derive canopy height models via structure-from-motion, while multispectral imagery was used to calculate reflectance-based vegetation indices. Biomass was destructively sampled, and DMY and LY were determined through drying and botanical fractioning. A total of 276 biomass samples were collected over four harvests from three AGM fields. To predict DMY and LY, two machine learning regression models (random forest and extreme gradient boosting) were trained and validated using leave-spatial-temporal-group-out cross-validation to ensure robustness across locations and time. Random forest models using fused spectral and height data achieved the best performance, with median prediction errors of 0.51 t ha⁻¹ for DMY (median R² = 0.49) and 0.40 t ha⁻¹ for LY (median R² = 0.65), demonstrating good generalizability under varying agronomic conditions. The study highlights the potential of combining UAV-borne height and spectral data for high-resolution yield mapping in complex forage systems. Predictive maps of DMY and LY provide spatial insights that can inform management and support sustainable nitrogen cycling in crop rotations.
Pub Date: 2025-12-10 | DOI: 10.1007/s11119-025-10295-5
What drives the adoption of digital technology? An empirical assessment of multiple technology adoption by soybean farmers in São Paulo, Brazil
Rodrigo Damasceno, Marcelo José Carrer, Larissa Gui Pagliuca, Marcela de Mello Brandão Vinholis, Hildo Meirelles de Souza Filho
Pub Date: 2025-12-09 | DOI: 10.1007/s11119-025-10302-9
Self-supervised learning outperforms supervised learning for crop classification by annotating only 5% of images
Anastasiia Safonova, Stefan Stiller, Momchil Yordanov, Masahiro Ryo
Purpose: Supervised Learning (SL) is one of the most pervasive Artificial Intelligence (AI) methodologies used for image-based classification in agriculture. However, SL requires a large annotation effort and is susceptible to overfitting to the given prediction task. Self-Supervised Learning (SSL) is a training paradigm that could address these issues, but its potential has not yet been investigated in the agricultural domain. This paper presents an initial experimental investigation and comparison of SL and SSL for the classification of agricultural images with limited samples.
Methods: We used an agricultural subset of the Land Use and Cover Area Frame Survey (LUCAS) dataset as a case study. In total, it comprised 1,000 images for each of 10 crops: common wheat, barley, oats, maize, potatoes, sugar beet, sunflower, rape and turnip rape, soya, and temporary grassland. For SL, we trained popular and frequently used Convolutional Neural Network (CNN) architectures such as VGG16, Inception, ResNet-18/50, SqueezeNet, ResNeXt-50, MobileNet-V2, ShuffleNet, EfficientNet-V2, and ConvNeXt Tiny, with and without data augmentation. For SSL, the best-performing CNN architectures (ResNet-18, ResNet-50, and ResNeXt-50) were tested further: they were pre-trained with the VICReg algorithm (Variance-Invariance-Covariance Regularization) and then fine-tuned with supervision for crop type classification.
Results: Our results demonstrate that the SSL models can distinguish crop types (common wheat, barley, oats, maize, potatoes, sugar beet, sunflower, rape, soya, and grassland) even without labels, based solely on morphological features, and organize them into three semantically meaningful visual groups: cereal-like and grassland crops, upright broadleaf crops, and low-growing broadleaf crops. The fine-tuned models, particularly ResNeXt-50, achieved superior performance compared to any of the SL models. Notably, the fine-tuned SSL models outperformed the best-performing SL models while using only 5% of the labeled training data for fine-tuning, corresponding to a small and balanced subset of the training split.
Conclusion: These findings highlight the potential of SSL for improving classification efficiency and generalization under limited data availability in agricultural applications, providing a viable path toward more efficient agricultural monitoring systems.
Pub Date: 2025-12-09 | DOI: 10.1007/s11119-025-10303-8
Optimized autonomous navigation for field robots: extended results and practical deployment
J. Rakun, G. Popič
Purpose: This study introduces an optimized algorithm for autonomous navigation of field robots, aiming to improve navigation accuracy, reduce crop damage and shorten execution times in agricultural environments.
Methods: The enhanced solution integrates advanced data filtering with sensor fusion techniques, combining LiDAR and IMU inputs to produce precise 3D point cloud representations for reliable navigation in structured crop rows. Both the legacy and improved algorithms were evaluated through simulation and physical trials on the FarmBeast robotic platform.
Results: The improved algorithm reduced traversal time by up to 33% on certain field sections and lowered crop damage by 25% compared to the previous version.
Conclusions: The results confirm the robustness and effectiveness of the enhanced navigation system in complex agricultural field conditions, demonstrating its potential for practical deployment in farming automation.