
Computers and Electronics in Agriculture: Latest Publications

MTA-SM: Multi-machine path planning and time-window scheduling joint optimization method in hilly safflower harvesting
IF 8.9 · CAS Tier 1 (Agricultural and Forestry Sciences) · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2026-01-27 · DOI: 10.1016/j.compag.2026.111471
Liwei Yang , Ping Li , Zhongshuo Ding , Chao Sun , Fei Chen , Yijiang Zheng , Yun Ge
With the development of agricultural intelligence, mechanized harvesting of crops characterized by short picking periods and growth-pattern-dependent optimal harvest times faces dual challenges in path planning and multi-machine collaborative scheduling. To tackle the complex scenario of safflower harvesting in hilly terrains, this paper proposes a Multi-Task Assignment and Scheduling Mechanism (MTA-SM) aimed at achieving both full-coverage picking paths and time-window-constrained scheduling for multiple machines. The system consists of two major modules: for path planning, a terrain adaptation factor is introduced to improve the Coverage Path Planning (CPP) algorithm, and the turning strategy of harvesters is optimized to reduce ineffective movements and enhance operational coverage. For scheduling, a Vehicle Routing Problem with Time Windows (VRPTW) model is formulated, and an improved Ant Colony Optimization (ACO) algorithm with a dynamic pheromone updating mechanism is employed to realize coordinated scheduling among multiple machines, thereby minimizing path conflicts and idle time. Simulation results indicate that the MTA-SM system not only optimizes the operational path of a single harvester but also significantly enhances the efficiency and resource utilization of multi-machine collaboration. This provides a practical and intelligent solution for the mechanized harvesting of crops with short picking windows.
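The scheduling half of MTA-SM pairs a VRPTW formulation with an improved ACO whose pheromone update is dynamic. As a hedged, single-harvester illustration of that combination (toy field nodes, time windows, and an iteration-dependent evaporation rate all invented here as a stand-in for the paper's dynamic pheromone mechanism), an ant-colony sketch might look like:

```python
import math
import random

random.seed(0)

# Toy field nodes: (x, y, earliest, latest). Coordinates and time windows are
# invented for illustration; node 0 is the depot.
NODES = [(0, 0, 0, 100), (2, 1, 0, 30), (5, 4, 10, 50), (1, 6, 20, 70), (7, 2, 5, 60)]
DEPOT = 0

def dist(a, b):
    ax, ay = NODES[a][:2]
    bx, by = NODES[b][:2]
    return math.hypot(ax - bx, ay - by)

def construct_route(tau, alpha=1.0, beta=2.0):
    """One ant builds a route; arrival must respect each [earliest, latest] window."""
    route, t, cur = [DEPOT], 0.0, DEPOT
    unvisited = set(range(len(NODES))) - {DEPOT}
    while unvisited:
        feasible = [j for j in unvisited if t + dist(cur, j) <= NODES[j][3]]
        if not feasible:                      # a window has closed: abort this ant
            return None, None
        weights = [tau[cur][j] ** alpha * (1.0 / dist(cur, j)) ** beta for j in feasible]
        nxt = random.choices(feasible, weights)[0]
        t = max(t + dist(cur, nxt), NODES[nxt][2])   # wait if the window is not open yet
        route.append(nxt)
        unvisited.discard(nxt)
        cur = nxt
    return route, t

def aco_vrptw(iters=50, ants=10):
    n = len(NODES)
    tau = [[1.0] * n for _ in range(n)]
    best_route, best_cost = None, float("inf")
    for it in range(iters):
        # "Dynamic" evaporation: decay strengthens as the search matures,
        # a simple stand-in for the paper's dynamic pheromone updating rule.
        rho = 0.1 + 0.4 * it / iters
        solutions = []
        for _ in range(ants):
            route, cost = construct_route(tau)
            if route is not None:
                solutions.append((cost, route))
        for i in range(n):
            for j in range(n):
                tau[i][j] *= 1.0 - rho
        for cost, route in solutions:
            if cost < best_cost:
                best_cost, best_route = cost, route
            for a, b in zip(route, route[1:]):
                tau[a][b] += 1.0 / cost       # deposit more pheromone on faster routes
    return best_route, best_cost

route, cost = aco_vrptw()
```

Ants that would arrive after a node's window has closed are discarded, which is how the time-window constraint prunes the search.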
Citations: 0
A human-centric framework for enhancing usability in a vineyard digital twin system
IF 8.9 · CAS Tier 1 (Agricultural and Forestry Sciences) · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2026-01-27 · DOI: 10.1016/j.compag.2026.111490
Meysam Zareiee , Baixiang Zhao , Claire Palmer , Mahsa Mehrad , Yee Mey Goh , Rebecca Grant , Ella-Mae Hubbard , Jörn Mehnen , Anja Maier
This paper develops a human-centric framework for designing a Digital Twin (DT) and applies it, through a people-led approach, to a vineyard automation scenario. Current DT systems in agriculture often focus on technical performance, which creates usability challenges such as data overload, lack of role-specific interfaces, and reduced trust among non-technical users. The study applies Personas to represent user groups and introduces a human-centric framework for mapping tasks and decision processes. The framework makes an original contribution by demonstrating how established human-centric methods can be systematically integrated into a coherent DT development process, addressing a recognised methodological gap in the literature. The objective of this research is to evaluate how a structured, human-centric approach can improve usability, cognitive alignment, and stakeholder engagement in vineyard automation. These processes are modeled using Personas, Decision Ladders and Control Task Analysis (ConTA) to align system functionality with user roles and cognitive needs. The research methodology integrates Personas, ConTA, and Decision Ladders within a real-world vineyard case study. This study showcases the impact of applying a structured human-centric DT design framework on improving decision-making support, user engagement, and system efficiency in agricultural contexts. Moreover, it provides expert-informed evidence of how human-centric methods can be operationalised in a consistent and transparent way for DT redesign. Overall, the work demonstrates how a structured, people-led approach can enhance the usability and adoption of both new and existing DT systems, offering a transferable framework with relevance beyond agriculture.
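Mapping Personas onto Decision Ladder steps is, at its core, a data-structure exercise. A hypothetical sketch is below: the roles, goals, and the simplified five-step ladder are invented for illustration (Rasmussen's full ladder has more rungs) and are not taken from the paper.

```python
from dataclasses import dataclass, field

# Hypothetical illustration: Personas routed onto simplified Decision Ladder
# steps. Roles, goals, and the five-step ladder are invented for this sketch,
# not taken from the paper.

@dataclass
class Persona:
    role: str
    goals: list
    technical_skill: str          # e.g. "low", "high"

@dataclass
class DecisionLadderStep:
    label: str                    # e.g. "alert", "observe", "interpret", "plan", "execute"
    personas: list = field(default_factory=list)

ladder = [DecisionLadderStep(s) for s in ("alert", "observe", "interpret", "plan", "execute")]
grower = Persona("vineyard manager", ["schedule irrigation"], "low")
agronomist = Persona("agronomist", ["diagnose vine stress"], "high")

# Route each persona only to the steps its role needs, so the DT interface can
# hide the rest: role-specific views are what counters data overload.
ladder[0].personas.append(grower)
ladder[2].personas.append(agronomist)
ladder[3].personas += [grower, agronomist]

views = {step.label: [p.role for p in step.personas] for step in ladder}
```

The resulting `views` mapping is the kind of artifact a role-specific DT interface could be generated from.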
Citations: 0
Dynamic gamma correction-guided CNN for low-light corn tassel enhancement in intelligent detasselling systems
IF 8.9 · CAS Tier 1 (Agricultural and Forestry Sciences) · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2026-01-23 · DOI: 10.1016/j.compag.2026.111436
Qirui Wang , Yang Liu , Shenyao Hu , Yuting Yan , Bing Li , Hanping Mao
The accuracy of intelligent corn detasselling systems is severely compromised by low-light conditions, which degrade image quality and impede tassel recognition. To address the limitations of existing methods, such as noise amplification, detail distortion, and inadequate global illumination modeling, a Low-Light Corn Plant Image Enhancement Model (L2CP-IEM) is proposed. The core of L2CP-IEM is an innovative closed-loop dynamic gamma correction mechanism. This mechanism, guided by the discriminator’s confidence, is embedded within a residual encoder-decoder architecture, enabling adaptive illumination adjustment and stable training. By using a green cardboard calibration method, a high-quality dataset consisting of 950 paired low-light and normal-light corn images was created. Experiments on the LOL-v1 benchmark dataset demonstrate that L2CP-IEM outperforms state-of-the-art methods such as GSAD and CIDNet in terms of the SSIM (0.908) and LPIPS (0.059). Ablation studies further validate the critical roles of residual connections and the dynamic gamma correction mechanism. In practical corn tassel image tests, L2CP-IEM achieves balanced performance in terms of brightness and colour restoration, significantly enhances the reconstruction of natural textures and hierarchical details, and fully restores the confidence of the Mask R-CNN in image segmentation. By synergizing physical principles with data-driven approaches, this method significantly improves the quality of low-light images and the robustness of recognition, thus offering a reliable and efficient solution for agricultural visual automation.
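The closed-loop dynamic gamma correction can be sketched in miniature. L2CP-IEM drives the update with the discriminator's confidence, which cannot be reproduced here, so a target mean brightness serves as the stand-in feedback signal (an assumption made purely so the loop is self-contained):

```python
import math

# Closed-loop gamma correction in miniature. L2CP-IEM drives the update with a
# GAN discriminator's confidence; that signal is replaced here by a target mean
# brightness, an assumption for illustration only.

def gamma_correct(pixels, gamma):
    """Power-law correction for pixel values normalized to [0, 1]."""
    return [p ** gamma for p in pixels]

def adaptive_gamma(pixels, target_mean=0.5, steps=50, lr=1.0):
    gamma = 1.0
    for _ in range(steps):
        mean = sum(gamma_correct(pixels, gamma)) / len(pixels)
        # Feedback: output too dark -> shrink gamma (brightens), too bright -> grow it.
        gamma *= math.exp(lr * (mean - target_mean))
    return gamma

low_light = [0.02, 0.05, 0.08, 0.12, 0.20]    # a dark pixel sample
g = adaptive_gamma(low_light)
enhanced = gamma_correct(low_light, g)
```

Because gamma below 1 brightens values in [0, 1], the loop settles at whatever exponent makes the output brightness hit the target.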
Citations: 0
Frequency-aware deep learning for diarrheal feces and floor fouling monitoring in pig pens
IF 8.9 · CAS Tier 1 (Agricultural and Forestry Sciences) · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2026-01-23 · DOI: 10.1016/j.compag.2026.111465
Hao Wang , Yixue Liu , Bin Sun , Juncheng Ma , Chao Liang , Xiao Yang , Renli Qi , Chaoyuan Wang
Floor fouling monitoring in pig facilities is essential for early disease detection and environmental hygiene management, as diarrheal feces indicates digestive disorders while manure accumulation directly impacts animal health and welfare. Current manual inspection methods are labor-intensive and subjective, while existing computer vision approaches suffer from unstable color features under varying lighting conditions and misclassification of background textures as fouling patterns. To address these challenges, we propose FreCANet, a frequency-aware deep learning framework that achieves multi-level fouling classification through hierarchical visual interference suppression. The method integrates three key innovations: Mask R-CNN preprocessing that eliminates pig body occlusion (improving detection recall by up to 17.21%), Frequency Dynamic Convolution that separates manure contamination features from environmental noise across different frequency bands, and Efficient Channel Attention embedded within residual connections for selective feature enhancement. Using a comprehensive dataset of 25,228 images covering seven fouling categories across the complete growth cycle, FreCANet achieved 88.31% accuracy and 0.8679 F1-Score, outperforming ResNet-152 by 2.44% and 2.93% respectively. Diarrheal feces detection reached 95.9% precision on slatted floors and 89.3% recall on solid floors, enabling reliable early warning for digestive health issues. The four-level manure contamination classification achieved 77.2–87.4% precision across fouling gradients from clean to severely soiled conditions. These results demonstrate FreCANet’s effectiveness in transforming subjective manual inspection into quantitative pen hygiene assessment for precision livestock farming applications.
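The band-separation idea behind Frequency Dynamic Convolution can be illustrated with a plain FFT mask. FreCANet learns per-band kernels; the fixed cutoff below is purely an illustrative assumption:

```python
import numpy as np

# Band separation with a plain FFT mask. FreCANet's Frequency Dynamic
# Convolution learns per-band kernels; the fixed cutoff here is only an
# illustrative assumption.

def split_bands(signal, cutoff):
    spec = np.fft.rfft(signal)
    low = spec.copy()
    low[cutoff:] = 0              # keep slow, illumination-like variation
    high = spec.copy()
    high[:cutoff] = 0             # keep fine, texture-like variation
    return np.fft.irfft(low, n=len(signal)), np.fft.irfft(high, n=len(signal))

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
row = np.sin(x) + 0.3 * np.sin(12 * x)        # smooth trend plus fine texture
low, high = split_bands(row, cutoff=4)        # the bands sum back to the original row
```

Since the two masks partition the spectrum, the bands reconstruct the input exactly, so nothing is lost by processing them separately.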
Citations: 0
Research on a complex start-up control strategy for power-shift tractors based on rule mapping and multi-mode model predictive control
IF 8.9 · CAS Tier 1 (Agricultural and Forestry Sciences) · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2026-01-23 · DOI: 10.1016/j.compag.2026.111453
Li-Quan Lu , Ze-Peng Zhang , Guang-Lin Zhang , Hao-Ran Yang , Zhong-Xiang Zhu , Zheng-He Song , Zhi-Qiang Zhai , Jian-Hua Wang , Chuan-Chuan Zhang
To address the significant degradation in start-up comfort, smoothness, and safety of high-power power-shift tractors caused by variations in multiple start-up parameters, rapid vehicle state transitions, and environmental excitations during practical operations, this paper proposes a coordinated control approach consisting of a start-up condition recognition method based on spatial rule mapping and subspace partitioning, and a multi-mode model predictive control (MPC)-based start-up control strategy. First, a start-up dynamic model of the power-shift transmission system is established, and single-factor comparative simulations are conducted in the Matlab/Simulink environment to analyze the influence mechanisms of throttle opening, main/sub gearbox gear selection, road slope, and vehicle initial state on start-up time, clutch friction work, and jerk. Based on these analyses, the multi-source parameters are reduced and normalized into three dimensionless indicators, namely driver start-up intention, tractor initial state, and load state, and the three-dimensional feature space is partitioned into four subspaces according to their impact on start-up performance, enabling real-time start-up condition recognition. A multi-mode MPC controller is then constructed, and a multi-objective genetic algorithm is employed to determine the optimal control parameters for each subspace, achieving an adaptive balance among start-up rapidity, smoothness, and component wear under different operating conditions. The hardware-in-the-loop (HIL) test results indicate that, compared with conventional control methods, the proposed multi-mode MPC exhibits more stable and well-balanced overall performance under different start-up conditions. For example, in a typical flat-road start-up scenario, the maximum jerk is reduced to 46.2 m/s³, while in a slope start-up condition, the reverse travel distance is shortened to 0.178 m.
These results demonstrate the effectiveness of the proposed method in improving start-up smoothness and operational safety of tractors under complex start-up conditions, and provide a basis for subsequent real-vehicle experiments and engineering application studies.
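The rule-mapping step described above can be sketched as a lookup: three normalized indicators select one of four subspaces, each bound to its own MPC weight set. The thresholds, subspace names, and weights below are invented for illustration, not the paper's tuned values:

```python
# Rule mapping from three dimensionless indicators to one of four start-up
# subspaces, each bound to its own MPC weight set. Thresholds, subspace names,
# and weights are invented for illustration, not the paper's tuned values.

MPC_PARAMS = {
    "gentle":     {"w_jerk": 5.0, "w_time": 1.0},
    "rapid":      {"w_jerk": 1.0, "w_time": 5.0},
    "heavy_load": {"w_jerk": 3.0, "w_time": 2.0},
    "slope":      {"w_jerk": 4.0, "w_time": 3.0},
}

def classify_subspace(intent, init_state, load):
    """All three indicators are assumed normalized to [0, 1]."""
    if load > 0.7:                # heavy implement or trailer dominates
        return "heavy_load"
    if init_state > 0.6:          # e.g. starting on a grade
        return "slope"
    return "rapid" if intent > 0.5 else "gentle"

def select_mpc_mode(intent, init_state, load):
    subspace = classify_subspace(intent, init_state, load)
    return subspace, MPC_PARAMS[subspace]
```

In the paper the per-subspace parameters are found by a multi-objective genetic algorithm; here they are fixed constants so the mapping itself is visible.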
Citations: 0
Autonomous centerline and point-to-point navigation control method based on multi-sensor fusion in degraded orchards environments
IF 8.9 · CAS Tier 1 (Agricultural and Forestry Sciences) · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2026-01-23 · DOI: 10.1016/j.compag.2026.111466
Zhenyu Chen , Hanjie Dou , Changyuan Zhai , Zhichong Wang , Yuanyuan Gao , Xiu Wang
Autonomous navigation technology is playing an increasingly vital role in intelligent orchard production and management. However, the sparsely structured and feature-degraded environments of standardized orchards pose significant challenges to existing localization and navigation methods. To address these issues, this study proposes a multi-sensor fusion localization framework that integrates LiDAR, RTK-GNSS, and IMU data to overcome localization failures in degraded environments. Building on this framework, we develop an autonomous navigation system capable of robust tree-row centerline tracking and introduce a point-to-point navigation control method to achieve precise orchard operations. A 3D point cloud map and a 2D occupancy grid map are constructed using the LIO-SAM algorithm combined with a trunk-height-based point cloud extraction method. LiDAR point clouds provide measurement updates for 3D map matching, while tightly coupled RTK and IMU data supply motion estimates. A particle filter fuses these measurements to ensure reliable localization. Evaluation experiments—including map construction accuracy, localization error, navigation precision, and row-center tracking—show that the proposed multi-sensor fusion method reduces localization error by 66.27 % compared with LiDAR-only NDT matching. The row-center tracking error is 4.37 cm and the headland turning error is 20.18 cm, representing reductions of 69.86 % and 48.74 %, respectively, meeting the centerline navigation requirements for spraying. In point-to-point navigation tests, the average longitudinal and lateral errors are 0.225 m and 0.088 m, satisfying the accuracy requirements of harvesting, fertilization, and transport operations. This study provides a comprehensive solution for orchard autonomous navigation and practical techniques for intelligent orchard production in complex field environments.
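The particle-filter fusion can be miniaturized to one dimension: an odometry-style motion estimate (standing in for the tightly coupled RTK/IMU output) drives prediction, and a noisy position fix (standing in for LiDAR map matching) drives the weight update. All noise levels and the constant true motion are assumed:

```python
import math
import random

random.seed(1)

# One-dimensional particle-filter fusion sketch: a motion estimate predicts,
# a position fix updates the weights, and resampling keeps the particle set
# concentrated. Noise levels and the true motion are assumed.

def particle_filter(n=500, steps=20, true_step=1.0):
    particles = [random.gauss(0.0, 2.0) for _ in range(n)]
    truth = 0.0
    for _ in range(steps):
        truth += true_step
        # Predict: propagate every particle with the noisy motion estimate.
        particles = [p + true_step + random.gauss(0.0, 0.2) for p in particles]
        # Update: weight particles by likelihood of the position fix.
        z = truth + random.gauss(0.0, 0.3)
        weights = [math.exp(-((p - z) ** 2) / (2 * 0.3 ** 2)) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights] if total > 0 else None
        # Resample with replacement (uniform if all weights underflowed).
        particles = random.choices(particles, weights, k=n)
    return sum(particles) / n, truth

est, truth = particle_filter()
```

The posterior mean tracks the true position far more tightly than either the raw motion estimate or any single fix, which is the point of the fusion.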
Citations: 0
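The orchard-navigation abstract above describes a particle filter that fuses RTK/IMU motion estimates with LiDAR map-matching measurements. As a rough illustration of that fusion step, here is a minimal 1-D sketch; the noise scales, step sizes, and particle count are made-up assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, motion, lidar_meas, motion_std=0.1, meas_std=0.2):
    # Predict: apply the (RTK/IMU-style) motion estimate with process noise.
    particles = particles + motion + rng.normal(0.0, motion_std, particles.size)
    # Update: weight each particle by a Gaussian likelihood of the
    # (LiDAR map-match style) position measurement.
    w = np.exp(-0.5 * ((particles - lidar_meas) / meas_std) ** 2)
    w /= w.sum()
    # Resample: draw particles in proportion to their weights so the
    # belief concentrates around well-supported positions.
    idx = rng.choice(particles.size, size=particles.size, p=w)
    return particles[idx]

particles = rng.normal(0.0, 1.0, 500)   # initial belief about position (m)
for true_pos in (0.5, 1.0, 1.5):        # vehicle advances 0.5 m each step
    particles = pf_step(particles, motion=0.5, lidar_meas=true_pos)

estimate = particles.mean()             # fused position estimate near 1.5 m
```

Resampling at every step is the simplest variant; practical implementations usually resample only when the effective sample size drops, to avoid particle impoverishment.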
Poultry image synchronization acquisition system based on binocular and thermal infrared cameras
IF 8.9 CAS Tier 1 (Agricultural & Forestry Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-01-23 DOI: 10.1016/j.compag.2026.111464
Shuai Li, Yajun An, Yuxiao Han, Yingyan Yang, Yuanda Yang, Han Li, Man Zhang
The poultry breeding industry plays a vital role in global agriculture. This study presents a compact, mobile system for synchronized image acquisition in poultry, aimed at improving traditional, labor-intensive, and subjective methods of measuring individual chicken temperature through worker-carried and robot-autonomous inspection. The system uses object detection to locate each chicken and combines binocular and thermal infrared cameras to resolve time and position inconsistencies, accurately obtaining the chicken's temperature information. It is designed with modules for central control, image acquisition, network transmission, power management, and real-time monitoring. The central control module processes video data and connects with external systems via the network transmission module, while the power management module ensures that all components receive adequate power. The real-time monitoring module supports the display and storage of image data. In recent years, the Robot Operating System (ROS) has greatly improved software development efficiency; by employing ROS for multi-camera timing synchronization, the system achieves consistent data recording. In a two-day trial at the Deqingyuan Chicken Coop in Beijing, the system collected 2657 pairs of binocular and thermal infrared images, demonstrating real-time capability with a maximum time difference tmax of 4.97 ms, a minimum tmin of 0.66 ms, an average tmean of 1.25 ms, and a root mean square error tRMSE of 1.31 ms. Target detection tests indicated that You Only Look Once version 5 (YOLOv5) performed best, with a Precision (P) of 92.71 %, a Recall (R) of 93.91 %, a mean Average Precision (mAP) of 96.32 %, and an inference speed of 49.6 ms, showing high-precision, real-time detection. Image registration tests revealed a maximum matching error Hmax of 0.86 pixels, a minimum Hmin of 0.35 pixels, and an average Hmean of 0.61 pixels. The maximum structural similarity index (SSIMmax) was 0.86, the minimum SSIMmin was 0.61, and the average SSIMmean was 0.78. High-precision target detection, image time synchronization, and spatial registration methods together confirm accurate image registration and target temperature measurement. This system provides a technical solution and equipment support for the quick and accurate acquisition of chicken temperature information in the poultry breeding industry.
Computers and Electronics in Agriculture, Volume 244, Article 111464
Citations: 0
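The synchronization metrics reported above (tmax, tmin, tmean, tRMSE) are straightforward to compute from paired frame timestamps. A small sketch follows; the timestamp values are invented for illustration, and only the formulas mirror the metrics in the abstract.

```python
import numpy as np

def sync_stats(t_binocular, t_thermal):
    """Absolute time differences (ms) between paired camera frame timestamps."""
    dt = np.abs(np.asarray(t_binocular, dtype=float) - np.asarray(t_thermal, dtype=float))
    return {
        "t_max": dt.max(),                    # worst-case pairing error
        "t_min": dt.min(),                    # best-case pairing error
        "t_mean": dt.mean(),                  # average pairing error
        "t_rmse": np.sqrt(np.mean(dt ** 2)),  # root mean square error
    }

# Hypothetical timestamps (ms) for four frame pairs from the two cameras.
stats = sync_stats([0.0, 33.3, 66.6, 99.9], [1.2, 34.0, 66.9, 100.4])
```

Because RMSE weights large deviations more heavily than the mean, tRMSE is always at least tmean, which is consistent with the 1.31 ms vs 1.25 ms figures reported.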
AMGAN: A multimodal generative adversarial network for near-daily alfalfa multispectral image reconstruction
IF 8.9 CAS Tier 1 (Agricultural & Forestry Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-01-23 DOI: 10.1016/j.compag.2026.111468
Tong Yu, Jiang Chen, Jerome H. Cherney, Zhou Zhang
Accurate and temporally consistent multispectral observations are essential for monitoring alfalfa yield and quality, given its frequent harvest cycles and rapid regrowth. However, optical satellite imagery is often constrained by cloud cover, revisit intervals, and sensor availability. To overcome these limitations, we propose a novel Alfalfa Multimodal Generative Adversarial Network (AMGAN) designed for near-daily multispectral image reconstruction. Unlike conventional image-to-image or spatiotemporal fusion methods that overlook crop-specific characteristics, are restricted to observed timestamps, or depend heavily on dense temporal series, AMGAN leverages multisource (Landsat-8/9, Sentinel-1, PlanetScope) and multimodal (climate, geographic, temporal) information within an adversarial learning paradigm. This enables high-quality image generation from minimal inputs. Extensive experiments across five major alfalfa-producing states in the United States (2022–2024) show that AMGAN consistently surpasses four state-of-the-art (SOTA) deep learning baselines. It achieves higher reconstruction accuracy across all spectral bands, with pronounced gains in the red-edge and near-infrared (NIR) regions critical for vegetation assessment. Multisource integration and multimodal cues enhance robustness, ensuring reliable performance under diverse observation scenarios. The reconstructed imagery was subsequently evaluated in alfalfa yield and quality prediction tasks. Results demonstrated high predictive accuracy for dry matter yield (DM) in the cross validation (CV) experiment, with a coefficient of determination (R2) of 0.80, and moderate correlations for selected quality traits such as crude protein (CP), non-fiber carbohydrates (NFC), and minerals, while nutritive value traits tied to complex biochemical processes remained more challenging. Overall, this study underscores the potential of multimodal adversarial learning to bridge observational gaps in alfalfa monitoring. The proposed framework provides a scalable, crop-specific approach for generating temporally dense imagery, supporting precision management for biomass-related and proximate quality traits, while performance for digestibility traits remains limited.
Computers and Electronics in Agriculture, Volume 244, Article 111468
Citations: 0
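The AMGAN abstract above reports a coefficient of determination (R2) of 0.80 for dry matter yield, a metric several entries on this page rely on. For reference, a minimal sketch of R2 on made-up yield values (the numbers are illustrative, not data from the study):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

# Hypothetical dry-matter-yield observations vs. predictions (t/ha).
r2 = r_squared([2.0, 3.5, 4.0, 5.5], [2.2, 3.3, 4.1, 5.2])
```

R2 compares residual error against the variance of the observations, so it is 1.0 for a perfect fit, 0.0 for a predictor no better than the mean, and can go negative for worse-than-mean predictors.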
Agricultural autonomous decision-making system "Fuxi Brain" based on generative large models fused with the Internet of Things
IF 8.9 CAS Tier 1 (Agricultural & Forestry Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-01-23 DOI: 10.1016/j.compag.2026.111454
Haihua Chen, Guangyu Hou, Chen Hua, Shuo Wang, Ziyu Chen, Yanbin Zhang
Precise decision-making in agricultural production has long been a key bottleneck restricting the development of modern agriculture, and traditional decision-making models that rely on manual experience can no longer meet the needs of modern intelligent agriculture. By integrating the Agricultural Internet of Things (IoT) with generative large models, this study realizes a fully autonomous agricultural intelligent system called the "Fuxi Brain." The system consists of two parts. The first constructs a comprehensive "sky-air-ground-human-machine" data collection system, enabling digital perception of all factors in agricultural production. The second is the intelligent decision-making system. First, within the brain's dynamic decision-making layer, a multi-agent collaborative architecture based on a hybrid multi-model design (a general-purpose large model plus a specialized agricultural model) is proposed. Furthermore, a dynamic optimal matrix algorithm (DOMA) is designed to significantly improve the system's decision-making efficiency. Finally, a full-modality alignment training method is developed to effectively address the challenge of integrating multi-source heterogeneous data. Experimental results show that, on the AlpacaEval and MT-Bench benchmarks, the system's decision accuracy improved by 36.7 percentage points over mainstream models such as ChatGLM, and the full-modality alignment training method significantly outperformed traditional methods on cross-modal understanding tasks. Tests on a one-stop agricultural service decision-making platform demonstrated an accuracy rate of 92.3% relative to human experts. In an actual application on a 1367-acre corn planting site at Dahewan Farm in Inner Mongolia, the system autonomously generated 127 decisions over the production cycle with an accuracy rate of 89.7%, successfully enabling autonomous, precise decision-making throughout the entire process from planting to harvesting. This research provides innovative technical paths and practical examples for the development of intelligent agriculture.
Computers and Electronics in Agriculture, Volume 244, Article 111454
Citations: 0
UAV-based estimates of corn LAI using hyperspectral and EnMAP spectral resolutions
IF 8.9 CAS Tier 1 (Agricultural & Forestry Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-01-23 DOI: 10.1016/j.compag.2026.111469
K. Colton Flynn, H.K. Chinmayi, Gurjinder S. Baath, Bala Ram Sapkota, Chris Delhom, Douglas R. Smith
Accurate estimation of the Leaf Area Index (LAI) is essential for assessing vegetation health and managing agricultural productivity. This study examines the application of Unmanned Aerial Vehicle (UAV)-based hyperspectral imaging and convolved EnMAP spectral data for estimating corn LAI, using machine learning (ML) models to improve prediction accuracy. Several ML models, including k-Nearest Neighbors (KNN), Support Vector Machines (SVM), Partial Least Squares Regression (PLS), and Random Forests (RF), were assessed for predicting LAI from hyperspectral, EnMAP, and vegetation index features. Results demonstrate that PLS models consistently outperformed the other ML approaches, achieving coefficients of determination (R2) ranging from 0.79 to 0.82. Notably, for the two best-performing models (PLS and SVM), spectral indices such as NDRE, GNDVI, and NDVI proved more effective for LAI prediction than individual spectral bands. Interestingly, models built from hyperspectral wavelengths and from convolved EnMAP bands predicted LAI comparably well. Feature importance analysis reinforced the dominance of vegetation indices as key predictors. The findings emphasize the benefits of high-resolution UAV hyperspectral imaging, convolved satellite spectral data, and machine learning, particularly PLS, for scalable and accurate LAI estimation in agroecosystems.
Computers and Electronics in Agriculture, Volume 244, Article 111469
Citations: 0
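The indices highlighted in the LAI abstract above (NDRE, GNDVI, NDVI) are all normalized-difference ratios of two band reflectances. A minimal sketch follows; the reflectance values are illustrative assumptions, not measurements from the study.

```python
import numpy as np

def norm_diff(a, b):
    """Generic normalized difference: (a - b) / (a + b), in [-1, 1]."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return (a - b) / (a + b)

# Hypothetical reflectances for a healthy corn canopy (unitless, 0-1 scale).
nir, red, green, red_edge = 0.45, 0.05, 0.10, 0.20

ndvi = norm_diff(nir, red)        # (NIR - Red)     / (NIR + Red)
gndvi = norm_diff(nir, green)     # (NIR - Green)   / (NIR + Green)
ndre = norm_diff(nir, red_edge)   # (NIR - RedEdge) / (NIR + RedEdge)
```

Because each index collapses two bands into one bounded ratio, feeding a small set of such indices to a regressor like PLS trades raw spectral detail for features that are less sensitive to illumination, which is consistent with the study's finding that indices outperformed individual bands.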