The increasing danger of insect pests to agriculture and ecosystems calls for quick and precise diagnosis. Conventional techniques that depend on human observation and taxonomic knowledge are frequently labour-intensive and time-consuming. Incorporating artificial intelligence (AI) into detection has emerged as an effective approach in agriculture, including entomology. AI-based detection methods use machine learning, deep learning algorithms, and computer vision techniques to automate and improve the identification of insects. Deep learning algorithms, such as convolutional neural networks (CNNs), are primarily used for AI-powered insect pest identification, categorizing insects by their visual features through image-based classification. These methods have revolutionized insect identification by analyzing large databases of insect images and identifying distinct patterns and features linked to different species. AI-powered systems can further improve insect pest identification by utilizing other data modalities. However, there are obstacles to overcome, such as the scarcity of high-quality labelled datasets and issues of scalability and affordability. Despite these challenges, there is significant potential for AI-powered insect pest identification and pest management. Cooperation among researchers, practitioners, and policymakers is necessary to fully utilize AI in pest management. AI technology is transforming the field of entomology by enabling high-precision identification of insect pests, leading to more efficient and eco-friendly pest management strategies. This can enhance food safety and reduce the need for continuous insecticide spraying, ensuring the purity and safety of food supply chains. This review provides an update on AI-powered insect pest identification, covering its significance, methods, challenges, and prospects.
Title: Application of artificial intelligence in insect pest identification - A review
Authors: Sourav Chakrabarty, Chandan Kumar Deb, Sudeep Marwaha, Md. Ashraful Haque, Deeba Kamil, Raju Bheemanahalli, Pathour Rajendra Shashank
Journal: Artificial Intelligence in Agriculture, 16(1), Pages 44-61. DOI: 10.1016/j.aiia.2025.06.005
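The image-based classification approach described above can be illustrated with a minimal sketch of one CNN stage: convolution, a ReLU non-linearity, and global average pooling. The kernels and the toy image here are hypothetical stand-ins for learned filters and real insect photographs; production systems learn many such filters end to end.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def cnn_features(image, kernels):
    """One convolutional stage: convolve, ReLU, global-average-pool.
    Returns one scalar feature per kernel, usable as a classifier input."""
    feats = []
    for k in kernels:
        fmap = np.maximum(conv2d(image, k), 0.0)  # ReLU non-linearity
        feats.append(fmap.mean())                 # global average pooling
    return np.array(feats)

# Toy 6x6 "insect image": a bright vertical stripe (e.g. a wing edge)
img = np.zeros((6, 6))
img[:, 2] = 1.0
# Hand-crafted edge detectors standing in for learned kernels
vert = np.array([[-1.0, 1.0], [-1.0, 1.0]])
horiz = np.array([[-1.0, -1.0], [1.0, 1.0]])
f = cnn_features(img, [vert, horiz])
# The vertical-edge kernel responds strongly; the horizontal one does not
```

In a trained CNN the kernels are learned from labelled images, and many such stages are stacked before the final species classifier.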
Pub Date: 2026-03-01 | Epub Date: 2025-10-24 | DOI: 10.1016/j.aiia.2025.10.010
Xiaoxin Li, Mingrui Cai, Zhen Liu, Chengcheng Yin, Xinjie Tan, Jiangtao Wen, Yuxing Han
Side-view imaging for monitoring free-range chickens on edge devices faces significant challenges due to complex backgrounds, occlusions, and limited computational resources, which particularly affect the representational capacity and generalization ability of lightweight models. To address these limitations, this study proposes a Lightweight Free-range Chickens Detection Model based on YOLOv8n and knowledge distillation (LCD-YOLOv8n-KD), establishing an optimal balance between detection performance and model efficiency. The YOLOv8n architecture is enhanced by incorporating DualConv, CCFF, PCC2f, and SAHead modules to create LCD-YOLOv8n, significantly reducing model parameters and computational complexity. Further improvement is achieved through knowledge distillation, where a pre-trained large-scale model developed by our team serves as the teacher network and LCD-YOLOv8n as the student network, resulting in the LCD-YOLOv8n-KD model. Experimental validation is conducted using a comprehensive dataset comprising 6000 images with 162,864 labeled chicken targets, collected from various side-view angles in commercial farming environments. LCD-YOLOv8n-KD achieves AP50 values of 95.9 %, 90.2 %, 82.7 %, and 69.3 % on the test set and three independent test sets, respectively. Compared to the original YOLOv8n, the proposed model demonstrates a 16.13 % improvement in AP50 while reducing parameters by 47.84 % and GFLOPs by 41.46 %. The proposed model outperforms other state-of-the-art lightweight models in terms of detection efficiency, accuracy, and generalization capability, demonstrating strong potential for practical deployment in real-world free-range chicken farming environments.
Title: A lightweight model based on knowledge distillation for free-range chickens detection in complex commercial farming environments
Journal: Artificial Intelligence in Agriculture, 16(1), Pages 266-283. DOI: 10.1016/j.aiia.2025.10.010
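The teacher-student training described above follows the general knowledge distillation recipe. As an illustration only (the paper's exact loss, temperature, and weighting are not given here), a Hinton-style distillation objective blending soft teacher targets with hard labels can be sketched as:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T flattens the distribution."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, true_label,
                      T=4.0, alpha=0.5):
    """Blend of soft-target KL divergence and hard-label cross-entropy,
    the standard Hinton-style KD objective (not the paper's exact loss).
    T and alpha are illustrative hyperparameter choices."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    # KL(teacher || student) on softened distributions, scaled by T^2
    soft = T * T * np.sum(p_t * (np.log(p_t) - np.log(p_s)))
    hard = -np.log(softmax(student_logits)[true_label])  # cross-entropy
    return alpha * soft + (1 - alpha) * hard

# A student that exactly matches its teacher incurs zero soft loss
logits = [2.0, 0.5, -1.0]
```

The soft term transfers the teacher's inter-class similarity structure to the student, which is what lets a small detector approach the accuracy of a much larger model.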
Pub Date: 2026-03-01 | Epub Date: 2025-10-03 | DOI: 10.1016/j.aiia.2025.10.001
Zhijian Chen, Jianjun Yin, Sheikh Muhammad Farhan, Lu Liu, Ding Zhang, Maile Zhou, Junhui Cheng
As automation becomes increasingly adopted to mitigate labor shortages and boost productivity, autonomous technologies such as tractors, drones, and robotic devices are being utilized for tasks that include plowing, seeding, irrigation, fertilization, and harvesting. Successfully navigating these changing agricultural landscapes necessitates advanced sensing, control, and navigation systems that can adapt in real time to guarantee effective and safe operations. This review focuses on obstacle avoidance systems in autonomous farming machinery, highlighting multi-functional capabilities within intricate field settings. It analyzes various sensing technologies, including LiDAR, visual cameras, radar, ultrasonic sensors, GPS/GNSS, and inertial measurement units (IMUs), for their individual and collective contributions to precise obstacle detection in fluctuating field conditions. The review examines the potential of multi-sensor fusion to enhance detection accuracy and reliability, with particular emphasis on achieving seamless obstacle recognition and response. It addresses recent advancements in control and navigation systems, particularly path-planning algorithms and real-time decision-making, which enable autonomous systems to adjust dynamically across multi-functional agricultural environments. The methodologies used for path planning, including adaptive and learning-based strategies, are discussed for their ability to optimize navigation in complicated field conditions. Real-time decision-making frameworks are similarly evaluated for their capacity to provide prompt, data-driven reactions to changing obstacles, which is critical for maintaining operational efficiency. Moreover, this review discusses environmental and topographical challenges, such as variable terrain, unpredictable weather, complex crop arrangements, and interference from co-located machinery, that hinder obstacle detection and necessitate adaptive, resilient system responses. In addition, the paper emphasizes future research opportunities, highlighting the significance of advancements in multi-sensor fusion, deep learning for perception, adaptive path planning, model-free control strategies, artificial intelligence, and energy-efficient designs. Enhancing obstacle avoidance systems enables autonomous agricultural machinery to transform modern farming by increasing efficiency, precision, and sustainability. The review highlights the potential of these technologies to support global efforts for sustainable agriculture and food security, aligning agricultural innovation with the needs of a swiftly growing population.
Title: A comprehensive review of obstacle avoidance for autonomous agricultural machinery in multi-operational environment
Journal: Artificial Intelligence in Agriculture, 16(1), Pages 139-163. DOI: 10.1016/j.aiia.2025.10.001
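The path-planning algorithms surveyed in this review come in many forms. As one concrete, generic example (not a method from any reviewed paper), A* search on an occupancy grid routes a machine around a blocked row; the grid, start, and goal below are hypothetical:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (1 = obstacle, 0 = free).
    Returns the cheapest cell path from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]  # (f, g, cell, path)
    seen = set()
    while open_set:
        _, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        if cur in seen:
            continue
        seen.add(cur)
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc),
                                path + [(nr, nc)]))
    return None  # no route around the obstacles

# A field row blocked in the middle forces a detour
field = [[0, 0, 0],
         [1, 1, 0],
         [0, 0, 0]]
route = astar(field, (0, 0), (2, 0))
```

Adaptive and learning-based planners replace the fixed heuristic and static grid with models updated from live sensor data, but the search skeleton is the same.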
Pub Date: 2026-03-01 | Epub Date: 2025-10-02 | DOI: 10.1016/j.aiia.2025.10.003
Liuyan Feng, Changsu Xu, Han Tang, Zhongcai Wei, Xiaodong Guan, Jingcheng Xu, Mingjin Yang, Yunwu Li
With the rapid advancement of information technology, the intelligent and unmanned application of agricultural machinery and equipment has become a central focus of current research. Navigation technology is central to achieving autonomous driving in agricultural machinery and plays a key role in advancing intelligent agriculture. However, although some studies have reviewed aspects of agricultural machinery navigation technologies, a comprehensive and systematic overview that clearly outlines their developmental trajectory is still lacking. At the same time, there is an urgent need to break through traditional navigation frameworks to address the challenges posed by complex agricultural environments. Addressing this gap, this study provides a comprehensive overview of the evolution of navigation technologies in agricultural machinery, categorizing them into three stages based on the level of autonomy: assisted navigation, autonomous navigation, and intelligent navigation. Special emphasis is placed on brain-inspired navigation technology, an important branch of intelligent navigation that has attracted widespread attention as an emerging direction. By mimicking the cognitive and learning abilities of the brain, it demonstrates the adaptability and robustness needed to handle uncertainty and complex environments. Importantly, this paper explores six potential applications of brain-inspired navigation technology in the agricultural field, highlighting its significant potential to enhance the intelligence of agricultural machinery. The review concludes by discussing current limitations and future research directions. The findings of this study aim to guide the development of more intelligent, adaptive, and resilient navigation systems, accelerating the transformation toward fully autonomous agricultural operations.
Title: Application of navigation technology in agricultural machinery: A review and prospects
Journal: Artificial Intelligence in Agriculture, 16(1), Pages 94-123. DOI: 10.1016/j.aiia.2025.10.003
Pub Date: 2026-03-01 | Epub Date: 2025-11-22 | DOI: 10.1016/j.aiia.2025.11.007
Weinan Chen, Guijun Yang, Yang Meng, Haikuan Feng, Hongrui Wen, Aohua Tang, Jing Zhang, Hao Yang, Heli Li, Xingang Xu, Changchun Li, Zhenhong Li
Timely and accurate prediction of stem dry biomass (SDB) is crucial for monitoring crop growing status. However, conventional biomass estimation models are often limited by the influence of crop growth phase, which significantly restricts their temporal and spatial transferability. This study aimed to develop a semi-mechanistic stem biomass prediction model (PVWheat-SDB) using a phenological variable (PV) to accurately predict winter wheat SDB across different growth stages. The core of the model is to predict SDB using the PV under the constraint of remotely sensed canopy vegetation indices (VIs). The results demonstrated that VIs can quantify the variations in stem growth equations under different planting conditions and varieties. The developed PVWheat-SDB model, using the normalized difference red edge (NDRE) index and accumulated growing degree days (AGDD), performed well for SDB prediction, with R2, RMSE, nRMSE and MAE values of 0.88, 75.48 g/m2, 8.04 % and 55.36 g/m2 for the validation datasets of field spectral reflectance, and 0.82, 81.76 g/m2, 11.22 % and 62.82 g/m2 when transferred to unmanned aerial vehicle (UAV) hyperspectral images. Furthermore, the model can not only estimate SDB at the current growth stage but also predict SDB at subsequent phenological stages. The growth stage stacking strategy indicated that model accuracy improves significantly as more growth stages are incorporated, especially during the reproductive stages. These results highlight the robustness and transferability of the PVWheat-SDB model in accurately predicting SDB across different growing seasons and growth stages.
Title: Prediction of wheat stem biomass using a new unified model driven by phenological variable under remote-sensed canopy vegetation index constraints
Journal: Artificial Intelligence in Agriculture, 16(1), Pages 658-671. DOI: 10.1016/j.aiia.2025.11.007
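The two drivers of the PVWheat-SDB model, NDRE and AGDD, are both simple to compute from raw inputs. A minimal sketch, assuming a 0 °C base temperature and illustrative reflectance and temperature values (the paper's actual base temperature and data are not given here):

```python
def ndre(nir, red_edge):
    """Normalized difference red edge index from band reflectances:
    (NIR - RedEdge) / (NIR + RedEdge)."""
    return (nir - red_edge) / (nir + red_edge)

def agdd(daily_temps, t_base=0.0):
    """Accumulated growing degree days: sum over days of the daily mean
    temperature above a crop-specific base (0 C assumed here)."""
    return sum(max(0.0, (tmax + tmin) / 2.0 - t_base)
               for tmax, tmin in daily_temps)

# Illustrative canopy reflectances and a short run of (Tmax, Tmin) days
canopy_ndre = ndre(0.45, 0.30)
gdd = agdd([(12.0, 4.0), (15.0, 5.0), (-2.0, -8.0)])  # frost day adds 0
```

NDRE supplies the remotely sensed canopy constraint, while AGDD serves as the phenological clock that indexes the stem growth equations across stages.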
Pub Date: 2026-03-01 | Epub Date: 2025-10-27 | DOI: 10.1016/j.aiia.2025.10.012
Hui Wang, Chao Ruan, Jinling Zhao, Yunran Wang, Ying Li, Yingying Dong, Linsheng Huang
Wheat Fusarium head blight (FHB) severely affects wheat yields, and predicting its occurrence and spatial distribution is essential for safeguarding crop production. This study presents an interpretable machine learning method designed to predict FHB by leveraging multi-temporal and multi-feature information obtained from Sentinel-2 imagery. During the regreening and grain-filling stages, we extracted vegetation indices (VIs), texture features (TFs), and color indices (CIs). Single-temporal features were derived from the grain-filling stage, while multi-temporal features combined data from the grain-filling and regreening stages. The synthetic minority over-sampling technique (SMOTE) was employed to correct the class imbalance, while the most informative features were selected using the sequential forward selection (SFS) approach. The extreme gradient boosting (XGBoost) model, optimized using the simulated annealing (SA) algorithm and explained via the SHapley Additive exPlanations (SHAP) method, integrated VIs, TFs, and CIs as input features. The presented model demonstrated exceptional results, achieving a prediction accuracy of 89.9 % with multi-temporal features and a Kappa coefficient of 0.797. It outperformed random forest (RF), backpropagation neural network (BPNN), and support vector machine (SVM) models. This study indicates that an interpretable machine learning approach, which utilizes both multi-temporal and multi-feature data, is effective in forecasting FHB, thereby providing a valuable tool for agricultural management and disease prevention strategies.
Title: Utilizing interpretable machine learning algorithms and multiple features from multi-temporal Sentinel-2 imagery for predicting wheat fusarium head blight
Journal: Artificial Intelligence in Agriculture, 16(1), Pages 224-239. DOI: 10.1016/j.aiia.2025.10.012
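The Kappa coefficient reported above measures classification agreement beyond chance. A minimal sketch of Cohen's kappa with a hypothetical healthy/diseased confusion matrix (not the paper's actual counts):

```python
def cohens_kappa(confusion):
    """Cohen's kappa for a square confusion matrix
    (rows = reference classes, columns = predicted classes):
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / n
    # Chance agreement from the row and column marginals
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical healthy/diseased pixel counts, purely illustrative
kappa = cohens_kappa([[50, 5],
                      [8, 37]])
```

Unlike raw accuracy, kappa discounts the agreement a classifier would reach by guessing the majority class, which matters for the imbalanced disease maps SMOTE is used to rebalance.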
Pub Date: 2026-03-01 | Epub Date: 2025-12-06 | DOI: 10.1016/j.aiia.2025.12.001
Honghao Zhou, Bingxi Qin, Qing Li, Wenlong Su, Shaowei Liang, Haijiang Min, Jingrong Zang, Shichao Jin, Dong Jiang, Jiawei Chen
Automated phenotyping of wheat growth stages from 3D point clouds is still limited. This study presents a concise framework that reconstructs multi-view UAS imagery into 3D point clouds (jointing to maturity) and performs plot-level phenotyping. A novel 3D wheat plot detection network, integrating spatial-channel coordinated attention and area attention modules, improves depth-direction feature recognition, and a point-cloud-density-based row segmentation algorithm enables planting-row-scale plot delineation. A supporting software system facilitates 3D visualization and automated extraction of phenotypic parameters. We introduce a dynamic phenotypic index of five temporal metrics (growth stage, slow growth stage, height/area reduction stage, maximum height/area difference stage, and height/area change rate) for growth-stage classification and yield prediction using static and time-series models. Experiments show strong agreement between predicted and measured plot heights (R2 = 0.937); the detection network achieved AP3D = 94.15 % and APBEV = 95.35 % in "easy" mode; and a Bi-LSTM incorporating dynamic traits reached 82.37 % prediction accuracy for leaf area and yield, a 6.14 % improvement over static-trait models. This workflow supports high-throughput 3D phenotyping and reliable yield estimation for precision agriculture.
Title: Integrating 3D detection networks and dynamic temporal phenotyping for wheat yield classification and prediction. Authors: Honghao Zhou, Bingxi Qin, Qing Li, Wenlong Su, Shaowei Liang, Haijiang Min, Jingrong Zang, Shichao Jin, Dong Jiang, Jiawei Chen. DOI: 10.1016/j.aiia.2025.12.001. Artificial Intelligence in Agriculture, 16(1), pp. 603–618.
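The dynamic phenotypic index above derives yield-relevant signals from height/area time series. As a rough illustration only (the paper's exact metric definitions are not reproduced here, and the function name `dynamic_height_metrics` is hypothetical), two such temporal traits, the per-interval change rate and the interval of maximum height difference, could be computed like this:

```python
import numpy as np

def dynamic_height_metrics(days, heights):
    """Toy temporal-trait extraction from a plot-height time series:
    per-interval change rate and the interval with the largest height
    difference (loosely echoing the paper's dynamic metrics)."""
    days = np.asarray(days, dtype=float)
    heights = np.asarray(heights, dtype=float)
    diffs = np.diff(heights)              # height change per interval
    rates = diffs / np.diff(days)         # change rate, e.g. cm per day
    peak = int(np.argmax(np.abs(diffs)))  # index of max height-difference interval
    return rates, peak

# Four hypothetical UAS survey dates (days after jointing) and plot heights (cm).
rates, peak = dynamic_height_metrics([0, 10, 20, 30], [5.0, 20.0, 60.0, 70.0])
```

Per-plot temporal traits of this kind, stacked alongside static ones, are the sort of input a time-series model such as a Bi-LSTM could consume.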
Pub Date: 2026-03-01. Epub Date: 2025-10-30. DOI: 10.1016/j.aiia.2025.10.015
Xin Yang, Chenyi Xu, Yan Wang, Ruixia Feng, Jinshi Yu, Zichen Su, Teng Miao, Tongyu Xu
Accurately extracting plant point clouds from complex agricultural environments is essential for high-throughput phenotyping in smart farming. However, existing methods face significant challenges when processing large-scale agricultural point clouds owing to high noise levels, dense spatial distribution, and blurred structural boundaries between plant and non-plant regions. To address these issues, this study proposes PlaneSegNet, a voxel-based semantic segmentation network that incorporates an innovative plane attention module. This module aggregates projection features from the XZ and YZ planes, enhancing the model's ability to detect vertical geometric variations and thereby improving segmentation performance in boundary regions. Extensive experiments across representative agricultural scenarios at multiple scales, including open-field populations, greenhouse cultivation environments, and large-scale rural landscapes, demonstrate that PlaneSegNet significantly outperforms traditional geometry-based approaches and deep-learning models in plant and non-plant separation. By directly generating high-quality plant-only point clouds, PlaneSegNet significantly reduces reliance on manual pre-processing, offering a practical and generalisable solution for automated plant extraction across a wide range of agricultural applications. The dataset and source code used in this study are publicly available at https://github.com/yangxin6/PlaneSegNet.
Title: PlaneSegNet: A deep learning network with plane attention for plant point cloud segmentation in agricultural environments. Authors: Xin Yang, Chenyi Xu, Yan Wang, Ruixia Feng, Jinshi Yu, Zichen Su, Teng Miao, Tongyu Xu. DOI: 10.1016/j.aiia.2025.10.015. Artificial Intelligence in Agriculture, 16(1), pp. 284–299.
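The plane attention idea, aggregating projection features from the XZ and YZ planes to sharpen depth-direction (vertical) cues, can be sketched in a minimal form. This is an assumption-laden toy rather than PlaneSegNet's actual module: real attention weights are learned, whereas here the two plane projections are simply max-pooled, broadcast back to 3D, and squashed into a gate:

```python
import numpy as np

def plane_projection_gate(voxels):
    """Project a (X, Y, Z) voxel feature grid onto the XZ and YZ planes,
    broadcast the two views back to 3D, and squash them into a per-voxel
    gate that emphasizes vertical (Z-direction) structure."""
    xz = voxels.max(axis=1)                      # (X, Z): collapse the Y axis
    yz = voxels.max(axis=0)                      # (Y, Z): collapse the X axis
    w = 0.5 * (xz[:, None, :] + yz[None, :, :])  # broadcast back to (X, Y, Z)
    return 1.0 / (1.0 + np.exp(-w))              # sigmoid gating in [0, 1]

grid = np.zeros((4, 4, 4))
grid[1, 2, 3] = 1.0                 # one occupied voxel
weights = plane_projection_gate(grid)
```

The occupied voxel receives a higher gate value than empty ones because both of its plane projections carry signal; a learned module would replace the fixed max-pool and sigmoid with trainable attention.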
Pub Date: 2026-03-01. Epub Date: 2025-12-30. DOI: 10.1016/j.aiia.2025.12.004
Jibo Yue, Haikuan Feng, Yiguang Fan, Yang Liu, Chunjiang Zhao, Guijun Yang
Crop phenological stages, marked by key events such as germination, leaf emergence, flowering, and senescence, are critical indicators of crop development. Accurate, dynamic monitoring of these stages is essential for crop breeding management. This study introduces a novel multi-view sensing strategy based on coordinated unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs), designed to capture diverse canopy perspectives for phenological stage recognition in maize. Our approach integrates multiple data streams from top-down and internal-horizontal views, acquired via UAV and UGV platforms, and consists of three main components: (i) Acquisition of maize canopy height data, top-of-canopy (TOC) digital images, canopy multispectral images, and inside-of-canopy (IOC) digital images using a UAV- and UGV-based multi-view system; (ii) Development of a multi-modal deep learning framework, MSRNet (maize-phenological stages recognition network), which fuses physiological features from the UAV and UGV sensor modalities, including canopy height, vegetation indices, TOC maize leaf images, and IOC maize cob images; (iii) Comparative evaluation of MSRNet against conventional machine learning and deep learning models. Across 12 phenological stages (V2–R6), MSRNet achieved 84.5 % overall accuracy, outperforming conventional machine learning and single-modality deep learning benchmarks by 3.8–13.6 %. Grad-CAM visualizations confirmed dynamic, stage-specific attention, with the network automatically shifting focus from TOC leaves during vegetative growth to IOC reproductive organs during grain filling. This integrated UAV and UGV strategy, coupled with the dynamic feature selection capability of MSRNet, provides a comprehensive, interpretable workflow for high-throughput maize phenotyping and precision breeding.
Title: Maize phenological stage recognition via coordinated UAV and UGV multi-view sensing and deep learning. Authors: Jibo Yue, Haikuan Feng, Yiguang Fan, Yang Liu, Chunjiang Zhao, Guijun Yang. DOI: 10.1016/j.aiia.2025.12.004. Artificial Intelligence in Agriculture, 16(1), pp. 643–657.
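MSRNet fuses features from several modalities (canopy height, vegetation indices, TOC leaf images, IOC cob images). A minimal sketch of attention-weighted late fusion, with hypothetical names (`fuse_modalities`) and toy 2-D embeddings standing in for real network features, not the authors' architecture:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_modalities(feats, scores):
    """Attention-weighted late fusion: per-modality feature vectors are
    combined into one descriptor using softmax-normalized scores, so the
    model can shift emphasis between modalities across growth stages."""
    w = softmax(np.asarray(scores, dtype=float))          # modality weights, sum to 1
    stacked = np.stack([np.asarray(f, float) for f in feats])  # (M, D)
    return w @ stacked                                    # weighted sum, shape (D,)

fused = fuse_modalities(
    feats=[[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],  # three toy modality embeddings
    scores=[2.0, 0.0, 0.0],                      # attention favours the first
)
```

The stage-dependent shift the Grad-CAM analysis reports (from TOC leaves to IOC cobs) corresponds, in this toy, to the score vector changing with growth stage.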
Pub Date: 2026-03-01. Epub Date: 2025-12-29. DOI: 10.1016/j.aiia.2025.12.006
Pengpeng Zhang, Bing Lu, Jiali Shang, Changwei Tan, Shuchang Sun, Zhuo Xu, Junyong Ge, Yadong Yang, Huadong Zang, Zhaohai Zeng
Modern agricultural production necessitates real-time, precise monitoring of crop growth status to optimize management decisions. While remote sensing technologies offer multi-scale observational capabilities, conventional crop monitoring models face two critical limitations: (1) the independent retrieval of individual physiological traits, which overlooks the dynamic coupling between structural and physiological traits, and (2) inadequate cross-platform model transferability (e.g., from UAV images to satellite images), hindering the scaling of field-level precision to regional applications. To address these challenges, we propose a deep learning-based framework, the Cross-Task Growth Neural Network (CTGNN). The framework employs a dual-stream architecture to process spectral features for Leaf Area Index (LAI) and Soil Plant Analysis Development (SPAD) values, with cross-trait attention mechanisms capturing their interactions. We further assess the model's knowledge transfer capabilities by comparing two transfer learning strategies—Transfer Component Analysis (TCA) and Domain-Adversarial Neural Networks (DANN)—in adapting UAV-derived (1.3 cm/pixel) data to satellite-scale (3 m/pixel) monitoring. Validation using UAV-satellite synergetic datasets from extensively field-tested oat cultivars in China's Bashang Plateau demonstrates that CTGNN significantly reduces prediction errors for LAI and SPAD compared with independent trait models, with RMSE reductions of 6.4–14.4 % and 10.5–15.6 %, respectively. In a cross-domain transfer learning scenario, the CTGNN model with the DANN strategy requires only 5 % of satellite-labeled data for fine-tuning to achieve regional-scale monitoring (LAI: R2 = 0.769; SPAD: R2 = 0.714).
This framework provides a novel approach for the collaborative inversion of multiple crop growth traits, while its UAV-satellite cross-scale transfer capability facilitates optimal decision-making in oat variety breeding and cultivation technique dissemination, particularly in arid and semi-arid regions.
Title: CTGNN: UAV-satellite cross-domain transfer learning for monitoring oat growth in China's key production areas. Authors: Pengpeng Zhang, Bing Lu, Jiali Shang, Changwei Tan, Shuchang Sun, Zhuo Xu, Junyong Ge, Yadong Yang, Huadong Zang, Zhaohai Zeng. DOI: 10.1016/j.aiia.2025.12.006. Artificial Intelligence in Agriculture, 16(1), pp. 630–642.
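DANN's core mechanism is a gradient-reversal layer between the shared feature extractor and the domain classifier: identity on the forward pass, sign-flipped and λ-scaled gradient on the backward pass, which pushes the extractor toward features the domain classifier cannot separate (here, UAV vs. satellite imagery). A conceptual sketch of that one layer, not the authors' implementation:

```python
import numpy as np

def grl_forward(features):
    """Gradient-reversal layer, forward pass: plain identity."""
    return features

def grl_backward(grad_from_domain_head, lam=0.5):
    """Backward pass: flip the sign of the domain-classifier gradient and
    scale it by lam before it reaches the shared feature extractor, so
    minimizing the total loss *maximizes* domain confusion."""
    return -lam * np.asarray(grad_from_domain_head, dtype=float)

feat = np.array([0.3, -1.2, 0.7])
out = grl_forward(feat)                               # unchanged features
feat_grad = grl_backward([0.2, -0.4, 0.1], lam=0.5)   # reversed, scaled gradient
```

In a full DANN training loop this layer sits inside autograd; the fine-tuning result quoted above (5 % of satellite labels) would come from running such adversarial alignment before supervised adaptation.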