
Artificial Intelligence in Agriculture: Latest Publications

Application of artificial intelligence in insect pest identification - A review
IF 12.4 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-03-01 Epub Date: 2025-06-16 DOI: 10.1016/j.aiia.2025.06.005
Sourav Chakrabarty , Chandan Kumar Deb , Sudeep Marwaha , Md. Ashraful Haque , Deeba Kamil , Raju Bheemanahalli , Pathour Rajendra Shashank
The increasing danger of insect pests to agriculture and ecosystems calls for quick and precise diagnosis. Conventional techniques that depend on human observation and taxonomic knowledge are frequently labour-intensive and time-consuming. Incorporating artificial intelligence (AI) into detection has emerged as an effective approach in agriculture, including entomology. AI-based detection methods use machine learning, deep learning algorithms, and computer vision techniques to automate and improve the identification of insects. Deep learning algorithms, such as convolutional neural networks (CNNs), are primarily used for AI-powered insect pest identification, categorizing insects based on their visual features through image-based classification. These methods have revolutionized insect identification by analyzing large databases of insect images and identifying distinct patterns and features linked to different species. AI-powered systems can further improve insect pest identification by utilizing other data modalities. However, obstacles remain, such as the scarcity of high-quality labelled datasets and issues of scalability and affordability. Despite these challenges, AI-powered insect pest identification and pest management hold significant potential. Cooperation among researchers, practitioners, and policymakers is necessary to fully utilize AI in pest management. AI technology is transforming entomology by enabling high-precision identification of insect pests, leading to more efficient and eco-friendly pest management strategies. This can enhance food safety and reduce the need for continuous insecticide spraying, safeguarding the purity and safety of food supply chains. This review provides an update on AI-powered insect pest identification, covering its significance, methods, challenges, and prospects.
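The image-based classification pipeline the abstract describes (convolution, non-linearity, pooling, softmax over species classes) can be sketched in a few lines. This is a hypothetical toy forward pass with random weights, not a trained model from the review; the image size, filter count, and five pest classes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def classify(img, kernels, weights):
    """Conv -> ReLU -> global average pooling -> linear -> softmax."""
    feats = np.array([np.maximum(conv2d_valid(img, k), 0).mean() for k in kernels])
    return softmax(weights @ feats)

img = rng.random((32, 32))             # stand-in for a grey-scale pest image
kernels = rng.standard_normal((4, 3, 3))
weights = rng.standard_normal((5, 4))  # 5 hypothetical pest classes
probs = classify(img, kernels, weights)  # class probabilities, summing to 1
```

In practice, a deep CNN stacks many such conv/pooling layers and learns the kernels and weights from a labelled image database, which is exactly where the scarcity of high-quality labelled datasets noted above becomes the bottleneck.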
Citations: 0
A lightweight model based on knowledge distillation for free-range chickens detection in complex commercial farming environments
IF 12.4 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-03-01 Epub Date: 2025-10-24 DOI: 10.1016/j.aiia.2025.10.010
Xiaoxin Li , Mingrui Cai , Zhen Liu , Chengcheng Yin , Xinjie Tan , Jiangtao Wen , Yuxing Han
Side-view imaging for monitoring free-range chickens on edge devices faces significant challenges due to complex backgrounds, occlusions, and limited computational resources, which particularly affect the representational capacity and generalization ability of lightweight models. To address these limitations, this study proposes a Lightweight Free-range Chickens Detection Model based on YOLOv8n and knowledge distillation (LCD-YOLOv8n-KD), establishing an optimal balance between detection performance and model efficiency. The YOLOv8n architecture is enhanced by incorporating DualConv, CCFF, PCC2f, and SAHead modules to create LCD-YOLOv8n, significantly reducing model parameters and computational complexity. Further improvement is achieved through knowledge distillation, where a pre-trained large-scale model developed by our team serves as the teacher network and LCD-YOLOv8n as the student network, resulting in the LCD-YOLOv8n-KD model. Experimental validation is conducted using a comprehensive dataset comprising 6000 images with 162,864 labeled chicken targets, collected from various side-view angles in commercial farming environments. LCD-YOLOv8n-KD achieves AP50 values of 95.9 %, 90.2 %, 82.7 %, and 69.3 % on the test set and three independent test sets, respectively. Compared to the original YOLOv8n, the proposed model demonstrates a 16.13 % improvement in AP50 while reducing parameters by 47.84 % and GFLOPs by 41.46 %. The proposed model outperforms other state-of-the-art lightweight models in terms of detection efficiency, accuracy, and generalization capability, demonstrating strong potential for practical deployment in real-world free-range chicken farming environments.
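The teacher-student transfer at the core of this approach is usually trained with a distillation loss: the KL divergence between temperature-softened class distributions of the teacher and the student. A minimal sketch of that objective in the classic Hinton et al. form follows; the temperature and logits are illustrative, and the paper's exact loss formulation is not reproduced here.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax: higher T flattens the distribution."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=4.0):
    """T^2 * KL(teacher_soft || student_soft); zero iff the logits match."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * float(np.sum(p * (np.log(p) - np.log(q))))

teacher = [2.0, 0.5, -1.0]   # illustrative per-class logits from the teacher
student = [1.0, 0.8, -0.5]   # illustrative logits from the lightweight student
loss = distillation_loss(teacher, student)        # positive: student disagrees
identical = distillation_loss(teacher, teacher)   # zero: perfect imitation
```

During training this term is typically mixed with the student's ordinary detection loss, so the student learns both from ground-truth labels and from the teacher's softened predictions.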
Citations: 0
A comprehensive review of obstacle avoidance for autonomous agricultural machinery in multi-operational environment
IF 12.4 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-03-01 Epub Date: 2025-10-03 DOI: 10.1016/j.aiia.2025.10.001
Zhijian Chen , Jianjun Yin , Sheikh Muhammad Farhan , Lu Liu , Ding Zhang , Maile Zhou , Junhui Cheng
As automation becomes increasingly adopted to mitigate labor shortages and boost productivity, autonomous technologies such as tractors, drones, and robotic devices are being utilized for tasks including plowing, seeding, irrigation, fertilization, and harvesting. Successfully navigating these changing agricultural landscapes necessitates advanced sensing, control, and navigation systems that can adapt in real time to guarantee effective and safe operations. This review focuses on obstacle avoidance systems in autonomous farming machinery, highlighting multi-functional capabilities within intricate field settings. It analyzes various sensing technologies, including LiDAR, visual cameras, radar, ultrasonic sensors, GPS/GNSS, and inertial measurement units (IMUs), for their individual and collective contributions to precise obstacle detection in fluctuating field conditions. The review examines the potential of multi-sensor fusion to enhance detection accuracy and reliability, with a particular emphasis on achieving seamless obstacle recognition and response. It addresses recent advancements in control and navigation systems, particularly path-planning algorithms and real-time decision-making that enable autonomous systems to adjust dynamically across multi-functional agricultural environments. The methodologies used for path planning, including adaptive and learning-based strategies, are discussed for their ability to optimize navigation in complicated field conditions. Real-time decision-making frameworks are similarly evaluated for their capacity to provide prompt, data-driven reactions to changing obstacles, which is critical for maintaining operational efficiency. Moreover, this review discusses environmental and topographical challenges, such as variable terrain, unpredictable weather, complex crop arrangements, and interference from co-located machinery, that hinder obstacle detection and necessitate adaptive, resilient system responses.
In addition, the paper outlines future research opportunities, highlighting the significance of advancements in multi-sensor fusion, deep learning for perception, adaptive path planning, model-free control strategies, artificial intelligence, and energy-efficient designs. Enhancing obstacle avoidance systems enables autonomous agricultural machinery to transform modern farming by increasing efficiency, precision, and sustainability. The review highlights the potential of these technologies to support global efforts for sustainable agriculture and food security, aligning agricultural innovation with the needs of a swiftly growing population.
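A concrete instance of the path-planning algorithms such reviews survey is A* search on an occupancy grid: obstacles detected by the sensors are rasterized into blocked cells, and the planner finds a shortest collision-free route. The sketch below assumes a toy 0/1 grid and a 4-connected motion model; the field layout is illustrative only.

```python
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path on a 0/1 occupancy grid (1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, node, path so far)
    seen = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                if nxt not in seen:
                    heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # goal unreachable

field = [[0, 0, 0, 0],   # toy field map: rows of crop, 1 = detected obstacle
         [1, 1, 0, 1],
         [0, 0, 0, 0],
         [0, 1, 1, 0]]
path = astar(field, (0, 0), (3, 3))
```

Because the Manhattan heuristic never overestimates the remaining cost on a unit-cost grid, the first path popped at the goal is optimal; real machinery additionally replans as the occupancy grid is updated from live sensor fusion.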
Citations: 0
Application of navigation technology in agricultural machinery: A review and prospects
IF 12.4 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-03-01 Epub Date: 2025-10-02 DOI: 10.1016/j.aiia.2025.10.003
Liuyan Feng , Changsu Xu , Han Tang , Zhongcai Wei , Xiaodong Guan , Jingcheng Xu , Mingjin Yang , Yunwu Li
With the rapid advancement of information technology, the intelligent and unmanned applications of agricultural machinery and equipment have become a central focus of current research. Navigation technology is central to achieving autonomous driving in agricultural machinery and plays a key role in advancing intelligent agriculture. However, although some studies have reviewed aspects of agricultural machinery navigation technologies, a comprehensive and systematic overview that clearly outlines the developmental trajectory of these technologies is still lacking. At the same time, there is an urgent need to break through traditional navigation frameworks to address the challenges posed by complex agricultural environments. Addressing this gap, this study provides a comprehensive overview of the evolution of navigation technologies in agricultural machinery, categorizing them into three stages based on the level of autonomy: assisted navigation, autonomous navigation, and intelligent navigation. Special emphasis is placed on brain-inspired navigation technology, an important branch of intelligent navigation that has attracted widespread attention as an emerging direction. It innovatively mimics the cognitive and learning abilities of the brain, demonstrating high adaptability and robustness to better handle uncertainty and complex environments. Importantly, this paper explores six potential applications of brain-inspired navigation technology in agriculture, highlighting its significant potential to enhance the intelligence of agricultural machinery. The review concludes by discussing current limitations and future research directions. The findings of this study aim to guide the development of more intelligent, adaptive, and resilient navigation systems, accelerating the transformation toward fully autonomous agricultural operations.
Citations: 0
Prediction of wheat stem biomass using a new unified model driven by phenological variable under remote-sensed canopy vegetation index constraints
IF 12.4 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-03-01 Epub Date: 2025-11-22 DOI: 10.1016/j.aiia.2025.11.007
Weinan Chen , Guijun Yang , Yang Meng , Haikuan Feng , Hongrui Wen , Aohua Tang , Jing Zhang , Hao Yang , Heli Li , Xingang Xu , Changchun Li , Zhenhong Li
Timely and accurate prediction of stem dry biomass (SDB) is crucial for monitoring crop growing status. However, conventional biomass estimation models are often limited by the influence of crop growth phase, which significantly restricts their temporal and spatial transferability. This study aimed to develop a semi-mechanistic stem biomass prediction model (PVWheat-SDB) driven by a phenological variable (PV) to accurately predict winter wheat SDB across different growth stages. The core of the model is to predict SDB using the PV under remote-sensed canopy vegetation index (VI) constraints. The results demonstrated that VIs can quantify the variations in stem growth equations under different planting conditions and varieties. The PVWheat-SDB model developed with the normalized difference red edge (NDRE) index and accumulated growing degree days (AGDD) performed well for SDB prediction, with R2, RMSE, nRMSE and MAE values of 0.88, 75.48 g/m2, 8.04 % and 55.36 g/m2 on the validation datasets of field spectral reflectance, and 0.82, 81.76 g/m2, 11.22 % and 62.82 g/m2 when transferred to unmanned aerial vehicle (UAV) hyperspectral images. Furthermore, the model can not only estimate SDB at the current growth stage but also predict SDB at subsequent phenological stages. The growth-stage stacking strategy indicated that model accuracy improves significantly as more growth stages are incorporated, especially during the reproductive stages. These results highlight the robustness and transferability of the PVWheat-SDB model in accurately predicting SDB across different growing seasons and growth stages.
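The two model drivers named in the abstract, NDRE and AGDD, have standard common-use definitions that can be computed directly from band reflectances and daily temperatures. The formulas below are those standard definitions, not necessarily the paper's exact implementation, and the reflectance values, temperatures, and base temperature are illustrative.

```python
def ndre(nir, red_edge):
    """Normalized difference red edge: (NIR - RedEdge) / (NIR + RedEdge)."""
    return (nir - red_edge) / (nir + red_edge)

def agdd(daily_tmax, daily_tmin, t_base=0.0):
    """Accumulated growing degree days: sum of max(0, (Tmax + Tmin)/2 - Tbase)."""
    return sum(max(0.0, (tmax + tmin) / 2.0 - t_base)
               for tmax, tmin in zip(daily_tmax, daily_tmin))

v = ndre(0.45, 0.30)                        # illustrative NIR / red-edge reflectances
g = agdd([10, 12, 8], [2, 4, -2], t_base=0.0)  # three illustrative days of temperatures
```

Here `v` is 0.15/0.75 = 0.2 and `g` is 6 + 8 + 3 = 17 growing degree days; in the model, the VI constrains the stem growth equation while AGDD-type thermal time serves as the phenological driver.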
Citations: 0
Utilizing interpretable machine learning algorithms and multiple features from multi-temporal Sentinel-2 imagery for predicting wheat fusarium head blight
IF 12.4 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-03-01 Epub Date: 2025-10-27 DOI: 10.1016/j.aiia.2025.10.012
Hui Wang , Chao Ruan , Jinling Zhao , Yunran Wang , Ying Li , Yingying Dong , Linsheng Huang
Wheat Fusarium head blight (FHB) severely affects wheat yields, and predicting its occurrence and spatial distribution is essential for safeguarding crop production. This study presents an interpretable machine learning method designed to predict FHB by leveraging multi-temporal and multi-feature information obtained from Sentinel-2 imagery. During the regreening and grain-filling stages, we extracted vegetation indices (VIs), texture features (TFs), and color indices (CIs). Single-temporal features were derived from the grain-filling stage, while multi-temporal features combined data from the grain-filling and regreening stages. The synthetic minority over-sampling technique (SMOTE) was employed to correct the class imbalance, and the most significant features were selected using the sequential forward selection (SFS) approach. The extreme gradient boosting (XGBoost) model, optimized using the simulated annealing (SA) algorithm and explained via the SHapley Additive exPlanations (SHAP) method, integrated VIs, TFs, and CIs as input features. The presented model demonstrated exceptional results, achieving a prediction accuracy of 89.9 % with multi-temporal features and a Kappa coefficient of 0.797. It outperformed random forest (RF), backpropagation neural network (BPNN), and support vector machine (SVM) models. This study indicates that an interpretable machine learning approach utilizing both multi-temporal and multi-feature data is effective in forecasting FHB, thereby providing a valuable tool for agricultural management and disease prevention strategies.
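The SMOTE step mentioned in the abstract rebalances classes by synthesizing new minority samples: each synthetic point is a linear interpolation between a minority sample and one of its nearest minority neighbours. A minimal pure-Python sketch of that idea follows; the toy 2-D feature points, `k`, and the fixed seed are illustrative assumptions, not the study's configuration.

```python
import random

def smote_sample(minority, k=2, rng=None):
    """Generate one synthetic minority sample by interpolating toward a neighbour."""
    rng = rng or random.Random(42)
    x = rng.choice(minority)
    # k nearest minority neighbours of x by squared Euclidean distance
    neighbours = sorted((p for p in minority if p is not x),
                        key=lambda p: sum((a - b) ** 2 for a, b in zip(p, x)))[:k]
    nn = rng.choice(neighbours)
    lam = rng.random()  # interpolation factor in [0, 1)
    return tuple(a + lam * (b - a) for a, b in zip(x, nn))

minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1)]  # toy diseased-plot feature vectors
synthetic = smote_sample(minority)
```

Because the new point lies on a segment between two real minority samples, it always falls inside the minority class's convex hull, which is what lets the classifier see a balanced training set without simple duplication.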
Citations: 0
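The abstract above describes a concrete pre-processing pipeline: SMOTE to balance the infected vs. healthy classes, SFS to pick features, then an SA-tuned XGBoost classifier. The interpolation idea behind SMOTE can be sketched in a few lines of NumPy; this is an illustrative re-implementation under assumed defaults (function name, `k`, seeding), not the imbalanced-learn routine such studies typically use:

```python
import numpy as np

def smote(X_min, n_new, k=5, seed=None):
    """Synthesise n_new minority-class samples by interpolating each
    randomly chosen base sample toward one of its k nearest minority
    neighbours (the core idea of SMOTE)."""
    rng = np.random.default_rng(seed)
    n = len(X_min)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]          # k nearest-neighbour indices
    base = rng.integers(0, n, size=n_new)      # random base samples
    neigh = nn[base, rng.integers(0, k, size=n_new)]
    gap = rng.random((n_new, 1))               # interpolation factor in [0, 1)
    return X_min[base] + gap * (X_min[neigh] - X_min[base])
```

Synthetic points lie on segments between real minority samples, so the balanced training set stays inside the original feature envelope before SFS and XGBoost are applied.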
Integrating 3D detection networks and dynamic temporal phenotyping for wheat yield classification and prediction
IF 12.4 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2026-03-01 Epub Date: 2025-12-06 DOI: 10.1016/j.aiia.2025.12.001
Honghao Zhou , Bingxi Qin , Qing Li , Wenlong Su , Shaowei Liang , Haijiang Min , Jingrong Zang , Shichao Jin , Dong Jiang , Jiawei Chen
Automated phenotyping of wheat growth stages from 3D point clouds is still limited. The study presents a concise framework that reconstructs multi-view UAS imagery into 3D point clouds (jointing to maturity) and performs plot-level phenotyping. A novel 3D wheat plot detection network—integrating spatial–channel coordinated attention and area attention modules—improves depth-direction feature recognition, and a point-cloud-density-based row segmentation algorithm enables planting-row-scale plot delineation. A supporting software system facilitates 3D visualization and automated extraction of phenotypic parameters. We introduce a dynamic phenotypic index of five temporal metrics (growth stage, slow growth stage, height/area reduction stage, maximum height/area difference stage, and height/area change rate) for growth-stage classification and yield prediction using static and time-series models. Experiments show strong agreement between predicted and measured plot heights (R2 = 0.937); the detection net achieved AP3D = 94.15 % and APBEV = 95.35 % in “easy” mode; and a Bi-LSTM incorporating dynamic traits reached 82.37 % prediction accuracy for leaf area and yield, a 6.14 % improvement over static-trait models. This workflow supports high-throughput 3D phenotyping and reliable yield estimation for precision agriculture.
{"title":"Integrating 3D detection networks and dynamic temporal phenotyping for wheat yield classification and prediction","authors":"Honghao Zhou ,&nbsp;Bingxi Qin ,&nbsp;Qing Li ,&nbsp;Wenlong Su ,&nbsp;Shaowei Liang ,&nbsp;Haijiang Min ,&nbsp;Jingrong Zang ,&nbsp;Shichao Jin ,&nbsp;Dong Jiang ,&nbsp;Jiawei Chen","doi":"10.1016/j.aiia.2025.12.001","DOIUrl":"10.1016/j.aiia.2025.12.001","url":null,"abstract":"<div><div>Automated phenotyping of wheat growth stages from 3D point clouds is still limited. The study presents a concise framework that reconstructs multi-view UAS imagery into 3D point clouds (jointing to maturity) and performs plot-level phenotyping. A novel 3D wheat plot detection network—integrating spatial–channel coordinated attention and area attention modules—improves depth-direction feature recognition, and a point-cloud-density-based row segmentation algorithm enables planting-row-scale plot delineation. A supporting software system facilitates 3D visualization and automated extraction of phenotypic parameters. We introduce a dynamic phenotypic index of five temporal metrics (growth stage, slow growth stage, height/area reduction stage, maximum height/area difference stage, and height/area change rate) for growth-stage classification and yield prediction using static and time-series models. Experiments show strong agreement between predicted and measured plot heights (R<sup>2</sup> = 0.937); the detection net achieved AP<sub>3D</sub> = 94.15 % and AP<sub>BEV</sub> = 95.35 % in “easy” mode; and a Bi-LSTM incorporating dynamic traits reached 82.37 % prediction accuracy for leaf area and yield, a 6.14 % improvement over static-trait models. 
This workflow supports high-throughput 3D phenotyping and reliable yield estimation for precision agriculture.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"16 1","pages":"Pages 603-618"},"PeriodicalIF":12.4,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145747432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
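The five-metric dynamic phenotypic index described above (growth stage, slow growth stage, height/area reduction stage, maximum height/area difference stage, and height/area change rate) can be illustrated on a single plot-height time series. The formulas below are one plausible reading of those metric names, not the paper's exact definitions:

```python
import numpy as np

def dynamic_traits(days, height):
    """Toy temporal metrics from a plot-height series: the survey
    interval with the fastest growth, the slowest (near-flat) interval,
    the interval of height reduction (senescence, if any), the maximum
    height difference, and the overall change rate."""
    days = np.asarray(days, dtype=float)
    height = np.asarray(height, dtype=float)
    rate = np.diff(height) / np.diff(days)   # cm/day between surveys
    return {
        "growth_stage": int(np.argmax(rate)),
        "slow_growth_stage": int(np.argmin(np.abs(rate))),
        "reduction_stage": int(np.argmin(rate)) if rate.min() < 0 else None,
        "max_height_difference": float(height.max() - height.min()),
        "change_rate": float((height[-1] - height[0]) / (days[-1] - days[0])),
    }
```

Stacking such per-interval metrics across surveys yields the kind of dynamic-trait sequence a Bi-LSTM can consume alongside static traits.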
PlaneSegNet: A deep learning network with plane attention for plant point cloud segmentation in agricultural environments
IF 12.4 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2026-03-01 Epub Date: 2025-10-30 DOI: 10.1016/j.aiia.2025.10.015
Xin Yang , Chenyi Xu , Yan Wang, Ruixia Feng, Jinshi Yu, Zichen Su, Teng Miao, Tongyu Xu
Accurately extracting plant point clouds from complex agricultural environments is essential for high-throughput phenotyping in smart farming. However, existing methods face significant challenges when processing large-scale agricultural point clouds owing to high noise levels, dense spatial distribution, and blurred structural boundaries between plant and non-plant regions. To address these issues, this study proposes PlaneSegNet, a voxel-based semantic segmentation network that incorporates an innovative plane attention module. This module aggregates projection features from the XZ and YZ planes, enhancing the model's ability to detect vertical geometric variations and thereby improving segmentation performance in boundary regions. Extensive experiments across representative agricultural scenarios at multiple scales, including open-field populations, greenhouse cultivation environments, and large-scale rural landscapes, demonstrate that PlaneSegNet significantly outperforms traditional geometry-based approaches and deep-learning models in plant and non-plant separation. By directly generating high-quality plant-only point clouds, PlaneSegNet significantly reduces reliance on manual pre-processing, offering a practical and generalisable solution for automated plant extraction across a wide range of agricultural applications. The dataset and source code used in this study are publicly available at https://github.com/yangxin6/PlaneSegNet.
{"title":"PlaneSegNet: A deep learning network with plane attention for plant point cloud segmentation in agricultural environments","authors":"Xin Yang ,&nbsp;Chenyi Xu ,&nbsp;Yan Wang,&nbsp;Ruixia Feng,&nbsp;Jinshi Yu,&nbsp;Zichen Su,&nbsp;Teng Miao,&nbsp;Tongyu Xu","doi":"10.1016/j.aiia.2025.10.015","DOIUrl":"10.1016/j.aiia.2025.10.015","url":null,"abstract":"<div><div>Accurately extracting plant point clouds from complex agricultural environments is essential for high-throughput phenotyping in smart farming. However, existing methods face significant challenges when processing large-scale agricultural point clouds owing to high noise levels, dense spatial distribution, and blurred structural boundaries between plant and non-plant regions. To address these issues, this study proposes PlaneSegNet, a voxel-based semantic segmentation network that incorporates an innovative plane attention module. This module aggregates projection features from the XZ and YZ planes, enhancing the model's ability to detect vertical geometric variations and thereby improving segmentation performance in boundary regions. Extensive experiments across representative agricultural scenarios at multiple scales, including open-field populations, greenhouse cultivation environments, and large-scale rural landscapes, demonstrate that PlaneSegNet significantly outperforms traditional geometry-based approaches and deep-learning models in plant and non-plant separation. By directly generating high-quality plant-only point clouds, PlaneSegNet significantly reduces reliance on manual pre-processing, offering a practical and generalisable solution for automated plant extraction across a wide range of agricultural applications. 
The dataset and source code used in this study are publicly available at <span><span>https://github.com/yangxin6/PlaneSegNet</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"16 1","pages":"Pages 284-299"},"PeriodicalIF":12.4,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145465552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
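The plane attention idea above (aggregating XZ- and YZ-plane projection features to strengthen vertical-geometry cues) can be caricatured on a single-channel voxel grid. This particular gate construction is an assumption for illustration, not the published module:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def plane_attention(feat):
    """Toy plane attention on a (X, Y, Z) voxel feature grid:
    max-project onto the XZ and YZ planes, broadcast both maps back
    to 3D, and use their sum as a sigmoid gate on the input."""
    xz = feat.max(axis=1, keepdims=True)   # collapse Y -> (X, 1, Z)
    yz = feat.max(axis=0, keepdims=True)   # collapse X -> (1, Y, Z)
    gate = sigmoid(xz + yz)                # broadcasts to (X, Y, Z)
    return feat * gate
```

Because both projections preserve the Z axis, vertical (depth-direction) structure dominates the gate, which is the intuition behind emphasising plant/non-plant boundaries.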
Maize phenological stage recognition via coordinated UAV and UGV multi-view sensing and deep learning
IF 12.4 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2026-03-01 Epub Date: 2025-12-30 DOI: 10.1016/j.aiia.2025.12.004
Jibo Yue , Haikuan Feng , Yiguang Fan , Yang Liu , Chunjiang Zhao , Guijun Yang
Crop phenological stages, marked by key events such as germination, leaf emergence, flowering, and senescence, are critical indicators of crop development. Accurate, dynamic monitoring of these stages is essential for crop breeding management. This study introduces a novel multi-view sensing strategy based on coordinated unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs), designed to capture diverse canopy perspectives for phenological stage recognition in maize. Our approach integrates multiple data streams from top-down and internal-horizontal views, acquired via UAV and UGV platforms, and consists of three main components: (i) Acquisition of maize canopy height data, top-of-canopy (TOC) digital images, canopy multispectral images, and inside-of-canopy (IOC) digital images using a UAV- and UGV-based multi-view system; (ii) Development of a multi-modal deep learning framework, MSRNet (maize-phenological stages recognition network), which fuses physiological features from the UAV and UGV sensor modalities, including canopy height, vegetation indices, TOC maize leaf images, and IOC maize cob images; (iii) Comparative evaluation of MSRNet against conventional machine learning and deep learning models. Across 12 phenological stages (V2–R6), MSRNet achieved 84.5 % overall accuracy, outperforming conventional machine learning and single-modality deep learning benchmarks by 3.8–13.6 %. Grad-CAM visualizations confirmed dynamic, stage-specific attention, with the network automatically shifting focus from TOC leaves during vegetative growth to IOC reproductive organs during grain filling. This integrated UAV and UGV strategy, coupled with the dynamic feature selection capability of MSRNet, provides a comprehensive, interpretable workflow for high-throughput maize phenotyping and precision breeding.
{"title":"Maize phenological stage recognition via coordinated UAV and UGV multi-view sensing and deep learning","authors":"Jibo Yue ,&nbsp;Haikuan Feng ,&nbsp;Yiguang Fan ,&nbsp;Yang Liu ,&nbsp;Chunjiang Zhao ,&nbsp;Guijun Yang","doi":"10.1016/j.aiia.2025.12.004","DOIUrl":"10.1016/j.aiia.2025.12.004","url":null,"abstract":"<div><div>Crop phenological stages, marked by key events such as germination, leaf emergence, flowering, and senescence, are critical indicators of crop development. Accurate, dynamic monitoring of these stages is essential for crop breeding management. This study introduces a novel multi-view sensing strategy based on coordinated unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs), designed to capture diverse canopy perspectives for phenological stage recognition in maize. Our approach integrates multiple data streams from top-down and internal-horizontal views, acquired via UAV and UGV platforms, and consists of three main components: (i) Acquisition of maize canopy height data, top-of-canopy (TOC) digital images, canopy multispectral images, and inside-of-canopy (IOC) digital images using a UAV- and UGV-based multi-view system; (ii) Development of a multi-modal deep learning framework, MSRNet (maize-phenological stages recognition network), which fuses physiological features from the UAV and UGV sensor modalities, including canopy height, vegetation indices, TOC maize leaf images, and IOC maize cob images; (iii) Comparative evaluation of MSRNet against conventional machine learning and deep learning models. Across 12 phenological stages (V2–R6), MSRNet achieved 84.5 % overall accuracy, outperforming conventional machine learning and single-modality deep learning benchmarks by 3.8–13.6 %. Grad-CAM visualizations confirmed dynamic, stage-specific attention, with the network automatically shifting focus from TOC leaves during vegetative growth to IOC reproductive organs during grain filling. 
This integrated UAV and UGV strategy, coupled with the dynamic feature selection capability of MSRNet, provides a comprehensive, interpretable workflow for high-throughput maize phenotyping and precision breeding.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"16 1","pages":"Pages 643-657"},"PeriodicalIF":12.4,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145924243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
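MSRNet fuses four modalities (canopy height, vegetation indices, TOC leaf images, IOC cob images), and the Grad-CAM results above show the weighting shifting with growth stage. A minimal sketch of such stage-dependent late fusion, using a softmax over per-modality scores as attention weights; the scoring head is assumed here, not the paper's architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_modalities(features, scores):
    """Attention-weighted late fusion: each modality contributes a
    feature vector, and a softmax over per-modality scores decides
    how much each one counts (e.g. TOC leaves early in the season,
    IOC cobs during grain filling). Here the scores are given; a
    real model would learn them from the inputs."""
    w = softmax(np.asarray(scores, dtype=float))
    stacked = np.stack([np.asarray(f, dtype=float) for f in features])
    return w @ stacked, w
```

With equal scores the fusion reduces to a plain average; a learned scorer lets the network re-weight modalities per phenological stage.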
CTGNN: UAV-satellite cross-domain transfer learning for monitoring oat growth in China’s key production areas
IF 12.4 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2026-03-01 Epub Date: 2025-12-29 DOI: 10.1016/j.aiia.2025.12.006
Pengpeng Zhang , Bing Lu , Jiali Shang , Changwei Tan , Shuchang Sun , Zhuo Xu , Junyong Ge , Yadong Yang , Huadong Zang , Zhaohai Zeng
Modern agricultural production necessitates real-time, precise monitoring of crop growth status to optimize management decisions. While remote sensing technologies offer multi-scale observational capabilities, conventional crop monitoring models face two critical limitations: (1) the independent retrieval of individual physiological traits, which overlooks the dynamic coupling between structural and physiological traits, and (2) inadequate cross-platform model transferability (e.g., from UAV images to satellite images), hindering the scaling of field-level precision to regional applications. To address these challenges, we proposed a deep learning-based framework, Cross-Task Growth Neural Network (CTGNN). This framework employed a dual-stream architecture to process spectral features for Leaf Area Index (LAI) and Soil Plant Analysis Development (SPAD), while using cross-trait attention mechanisms to capture their interactions. We further assessed the knowledge transfer capabilities of the model by comparing two transfer learning strategies—Transfer Component Analysis (TCA) and Domain-Adversarial Neural Networks (DANN)—in facilitating the adaptation of UAV-derived (1.3 cm/pixel) data to satellite-scale (3 m/pixel) monitoring. Validation using UAV-satellite synergetic datasets from extensively field-tested oat cultivars in China's Bashang Plateau demonstrates that CTGNN significantly reduces the prediction errors for LAI and SPAD compared with independent trait models, with RMSE reductions of 6.4–14.4 % and 10.5–15.6 %, respectively. In a cross-domain transfer learning scenario, the CTGNN model with the DANN strategy requires only 5 % of satellite-labeled data for fine-tuning to achieve regional-scale monitoring (LAI: R2 = 0.769; SPAD: R2 = 0.714). 
This framework provides a novel approach for the collaborative inversion of multiple crop growth traits, while its UAV-satellite cross-scale transfer capability facilitates optimal decision-making in oat variety breeding and cultivation technique dissemination, particularly in arid and semi-arid regions.
{"title":"CTGNN: UAV-satellite cross-domain transfer learning for monitoring oat growth in China’s key production areas","authors":"Pengpeng Zhang ,&nbsp;Bing Lu ,&nbsp;Jiali Shang ,&nbsp;Changwei Tan ,&nbsp;Shuchang Sun ,&nbsp;Zhuo Xu ,&nbsp;Junyong Ge ,&nbsp;Yadong Yang ,&nbsp;Huadong Zang ,&nbsp;Zhaohai Zeng","doi":"10.1016/j.aiia.2025.12.006","DOIUrl":"10.1016/j.aiia.2025.12.006","url":null,"abstract":"<div><div>Modern agricultural production necessitates real-time, precise monitoring of crop growth status to optimize management decisions. While remote sensing technologies offer multi-scale observational capabilities, conventional crop monitoring models face two critical limitations: (1) the independent retrieval of individual physiological traits, which overlooks the dynamic coupling between structural and physiological traits, and (2) inadequate cross-platform model transferability (e.g., from UAV images to satellite images), hindering the scaling of field-level precision to regional applications. To address these challenges, we proposed a deep learning-based framework, Cross-Task Growth Neural Network (CTGNN). This framework employed a dual-stream architecture to process spectral features for Leaf Area Index (LAI) and Soil Plant Analysis Development (SPAD), while using cross-trait attention mechanisms to capture their interactions. We further assessed the knowledge transfer capabilities of the model by comparing two transfer learning strategies—Transfer Component Analysis (TCA) and Domain-Adversarial Neural Networks (DANN)—in facilitating the adaptation of UAV-derived (1.3 cm/pixel) data to satellite-scale (3 m/pixel) monitoring. Validation using UAV-satellite synergetic datasets from extensively field-tested oat cultivars in China's Bashang Plateau demonstrates that CTGNN significantly reduces the prediction errors for LAI and SPAD compared with independent trait models, with RMSE reductions of 6.4–14.4 % and 10.5–15.6 %, respectively. 
In a cross-domain transfer learning scenario, the CTGNN model with the DANN strategy requires only 5 % of satellite-labeled data for fine-tuning to achieve regional-scale monitoring (LAI: R2 = 0.769; SPAD: R2 = 0.714). This framework provides a novel approach for the collaborative inversion of multiple crop growth traits, while its UAV-satellite cross-scale transfer capability facilitates optimal decision-making in oat variety breeding and cultivation technique dissemination, particularly in arid and semi-arid regions.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"16 1","pages":"Pages 630-642"},"PeriodicalIF":12.4,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145924150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
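The DANN strategy compared above hinges on a gradient-reversal layer: identity in the forward pass, gradient negated (scaled by −λ) in the backward pass, so the feature extractor learns UAV/satellite-invariant features that confuse the domain classifier. A framework-free sketch of just that layer, with manual forward/backward methods standing in for autograd:

```python
import numpy as np

class GradReverse:
    """Gradient-reversal layer used in DANN-style transfer learning:
    forward(x) is the identity, backward(g) flips the gradient sign
    and scales it by lam, pushing the upstream feature extractor to
    maximise (rather than minimise) the domain-classifier loss."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x

    def backward(self, grad_out):
        return -self.lam * np.asarray(grad_out, dtype=float)
```

Inserted between the shared encoder and the domain head, this single sign flip is what lets the same features serve both the trait-regression task and domain invariance.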