
Artificial Intelligence in Agriculture: Latest Publications

AI-driven aquaculture: A review of technological innovations and their sustainable impacts
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-02-06 DOI: 10.1016/j.aiia.2025.01.012
Hang Yang , Qi Feng , Shibin Xia , Zhenbin Wu , Yi Zhang
The integration of artificial intelligence (AI) in aquaculture has been identified as a transformative force, enhancing various operational aspects from water quality management to genetic optimization. This review provides a comprehensive synthesis of recent advancements in AI applications within the aquaculture sector, underscoring the significant enhancements in production efficiency and environmental sustainability. Key AI-driven improvements, such as predictive analytics for disease management and optimized feeding protocols, are highlighted, demonstrating their contributions to reducing waste and improving biomass outputs. However, challenges remain in terms of data quality, system integration, and the socio-economic impacts of technological adoption across diverse aquacultural environments. This review also addresses the gaps in current research, particularly the lack of robust, scalable AI models and frameworks that can be universally applied. Future directions are discussed, emphasizing the need for interdisciplinary research and development to fully leverage the potential of AI in aquaculture. This study not only maps the current landscape of AI applications but also serves as a call for continued innovation and strategic collaborations to overcome existing barriers and realize the full benefits of AI in aquaculture.
{"title":"AI-driven aquaculture: A review of technological innovations and their sustainable impacts","authors":"Hang Yang ,&nbsp;Qi Feng ,&nbsp;Shibin Xia ,&nbsp;Zhenbin Wu ,&nbsp;Yi Zhang","doi":"10.1016/j.aiia.2025.01.012","DOIUrl":"10.1016/j.aiia.2025.01.012","url":null,"abstract":"<div><div>The integration of artificial intelligence (AI) in aquaculture has been identified as a transformative force, enhancing various operational aspects from water quality management to genetic optimization. This review provides a comprehensive synthesis of recent advancements in AI applications within the aquaculture sector, underscoring the significant enhancements in production efficiency and environmental sustainability. Key AI-driven improvements, such as predictive analytics for disease management and optimized feeding protocols, are highlighted, demonstrating their contributions to reducing waste and improving biomass outputs. However, challenges remain in terms of data quality, system integration, and the socio-economic impacts of technological adoption across diverse aquacultural environments. This review also addresses the gaps in current research, particularly the lack of robust, scalable AI models and frameworks that can be universally applied. Future directions are discussed, emphasizing the need for interdisciplinary research and development to fully leverage AI potential in aquaculture. This study not only maps the current landscape of AI applications but also serves as a call for continued innovation and strategic collaborations to overcome existing barriers and realize the full benefits of AI in aquaculture.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 508-525"},"PeriodicalIF":8.2,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144106613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An efficient strawberry segmentation model based on Mask R-CNN and TensorRT
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-02-03 DOI: 10.1016/j.aiia.2025.01.008
Anthony Crespo , Claudia Moncada , Fabricio Crespo , Manuel Eugenio Morocho-Cayamcela
Currently, artificial intelligence (AI), particularly computer vision (CV), has numerous applications in agriculture. In this field, the production and consumption of strawberries have experienced great growth in recent years, which makes meeting the growing demand a challenge that producers must face. However, one of the main problems regarding the cultivation of this fruit is the high cost and long picking times. In response, automatic harvesting has emerged as an option to address this difficulty, and fruit instance segmentation plays a crucial role in these types of systems. Fruit segmentation refers to the identification and separation of individual fruits within a crop, allowing a more efficient and accurate harvesting process. Although deep learning (DL) techniques have shown potential for this task, the complexity of the models leads to difficulty in their implementation in real-time systems. For this reason, a model capable of performing adequately in real time while maintaining good precision is of great interest. With this motivation, this work presents an efficient Mask R-CNN model to perform instance segmentation of strawberry fruits. The efficiency of the model is assessed considering the number of frames per second (FPS) it can process, its size in megabytes (MB), and its mean average precision (mAP). Two approaches are provided: the first trains the model using the Detectron2 library, while the second trains it using the NVIDIA TAO Toolkit. In both cases, NVIDIA TensorRT is used to optimize the models. The results show that the best Mask R-CNN model, without optimization, has a performance of 83.45 mAP, 4 FPS, and 351 MB of size, which, after the TensorRT optimization, reached 83.17 mAP, 25.46 FPS, and only 48.2 MB of size. This makes the model suitable for implementation in real-time systems.
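As an illustration of the Detectron2 route described in the abstract, the minimal sketch below loads a COCO Mask R-CNN configuration, points it at a hypothetical fine-tuned strawberry checkpoint (strawberry_mask_rcnn.pth is a placeholder name), and runs instance segmentation on one image. This is not the authors' released code, only a plausible setup under those assumptions.

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
# Start from a standard Mask R-CNN (ResNet-50 FPN) config shipped with Detectron2
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1          # single class: strawberry
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # confidence threshold at inference
cfg.MODEL.WEIGHTS = "strawberry_mask_rcnn.pth"  # hypothetical fine-tuned weights
cfg.MODEL.DEVICE = "cpu"                     # or "cuda" if a GPU is available

predictor = DefaultPredictor(cfg)
image = cv2.imread("strawberry_field.jpg")   # placeholder input image
outputs = predictor(image)

instances = outputs["instances"].to("cpu")
print(len(instances), "strawberry instances detected")
print(instances.pred_masks.shape)            # (N, H, W) boolean instance masks
```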
{"title":"An efficient strawberry segmentation model based on Mask R-CNN and TensorRT","authors":"Anthony Crespo ,&nbsp;Claudia Moncada ,&nbsp;Fabricio Crespo ,&nbsp;Manuel Eugenio Morocho-Cayamcela","doi":"10.1016/j.aiia.2025.01.008","DOIUrl":"10.1016/j.aiia.2025.01.008","url":null,"abstract":"<div><div>Currently, artificial intelligence (AI), particularly computer vision (CV), has numerous applications in agriculture. In this field, the production and consumption of strawberries have experienced great growth in recent years, which makes meeting the growing demand a challenge that producers must face. However, one of the main problems regarding the cultivation of this fruit is the high cost and long picking times. In response, automatic harvesting has surged as an option to address this difficulty, and fruit instance segmentation plays a crucial role in these types of systems. Fruit segmentation is related to the identification and separation of individual fruits within a crop, allowing a more efficient and accurate harvesting process. Although deep learning (DL) techniques have shown potential for this activity, the complexity of the models leads to difficulty in their implementation in real-time systems. For this reason, a model capable of performing adequately in real-time, while also having good precision is of great interest. With this motivation, this work presents a efficient Mask R-CNN model to perform instance segmentation in strawberry fruits. The efficiency of the model is assessed considering the amount of frames per second (FPS) it can process, its size in megabytes (MB) and its mean average precision (mAP) value. Two approaches are provided: The first one consists on the training of the model using the Detectron2 library, while the second one focuses on the training of the model using the NVIDIA TAO Toolkit. In both cases, NVIDIA TensorRT is used to optimize the models. The results show that the best Mask R-CNN model, without optimization, has a performance of 83.45 mAP, 4 FPS, and 351 MB of size, which, after the TensorRT optimization, achieved 83.17 mAP, 25.46 FPS, and only 48.2 MB of size. It represents a suitable model for implementation in real-time systems.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 327-337"},"PeriodicalIF":8.2,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143704966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PWM offline variable application based on UAV remote sensing 3D prescription map
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-01-27 DOI: 10.1016/j.aiia.2025.01.011
Leng Han , Zhichong Wang , Miao He , Yajia Liu , Xiongkui He
Precision application in orchards enhances deposition uniformity and environmental sustainability by accurately matching nozzle output with canopy parameters. This study provides a pipeline for creating 3D prescription maps using a UAV and performing offline variable application. It also evaluates the accuracy of ground altitude measurements at various flight heights. At a flight height of 30 m, with a three-dimensional reconstruction method without phase-control points, the root mean square error (RMSE) for ground altitude measurement was 0.214 m and the mean absolute error (MAE) was 0.211 m; for the canopy area, these values were 0.591 m and 0.541 m, respectively. As flight height increased, the accuracy of altitude measurements declined and altitudes tended to be underestimated. Moreover, during offline variable spraying, the shape of the spray area influenced deposition accuracy, with line-segment collision detection areas achieving greater precision than conical ones. Field tests showed that the offline variable application method reduced pesticide usage by 32.43 % and enhanced spray uniformity. This newly developed process does not require costly sensors on each sprayer and has potential for field applications.
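For readers unfamiliar with the reported error metrics, the short sketch below computes RMSE and MAE for ground-altitude estimates using their standard definitions; the altitude values are invented placeholders, not data from the study.

```python
import numpy as np

# Hypothetical estimated vs. reference ground altitudes (m) at checkpoints
estimated = np.array([102.31, 98.75, 101.04, 99.60])
reference = np.array([102.10, 98.50, 100.80, 99.40])

errors = estimated - reference
rmse = np.sqrt(np.mean(errors ** 2))  # root mean square error
mae = np.mean(np.abs(errors))         # mean absolute error
bias = np.mean(errors)                # systematic over/underestimation
print(f"RMSE = {rmse:.3f} m, MAE = {mae:.3f} m, bias = {bias:+.3f} m")
```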
{"title":"PWM offline variable application based on UAV remote sensing 3D prescription map","authors":"Leng Han ,&nbsp;Zhichong Wang ,&nbsp;Miao He ,&nbsp;Yajia Liu ,&nbsp;Xiongkui He","doi":"10.1016/j.aiia.2025.01.011","DOIUrl":"10.1016/j.aiia.2025.01.011","url":null,"abstract":"<div><div>Precision application in orchards enhancing deposition uniformity and environmental sustainability by accurately matching nozzle output with canopy parameters. This study provides a pipeline for creating 3D prescription maps using a UAV and performing offline variable application. It also evaluates the accuracy of ground altitude measurements at various flight heights. At a flight height of 30 m, with a three-dimensional reconstruction method without phase-control points, the root mean square error (RMSE) for ground altitude measurement was 0.214 m and the mean absolute error (MAE) was 0.211 m; for the canopy area, these values were 0.591 m and 0.541 m, respectively. As flight height increased, the accuracy of altitude measurements declined and tended to be underestimated. Moreover, during offline variable spraying, the shape of the spray area influenced deposition accuracy, with collision detection area of a line segment achieving greater precision than conical ones. Field tests showed that the offline variable application method reduced pesticide usage by 32.43 % and enhanced spray uniformity. This newly developed process does not require costly sensors on each sprayer and has potential for field applications.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 496-507"},"PeriodicalIF":8.2,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144106612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Efficient one-stage detection of shrimp larvae in complex aquaculture scenarios
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-01-27 DOI: 10.1016/j.aiia.2025.01.009
Guoxu Zhang , Tianyi Liao , Yingyi Chen , Ping Zhong , Zhencai Shen , Daoliang Li
The swift evolution of deep learning has greatly benefited the field of intensive aquaculture. Specifically, deep learning-based shrimp larvae detection has offered important technical assistance for counting shrimp larvae and recognizing abnormal behaviors. Firstly, the transparent bodies and small sizes of shrimp larvae, combined with complex scenarios due to variations in light intensity and water turbidity, make it challenging for current detection methods to achieve high accuracy. Secondly, deep learning-based object detection demands substantial computing power and storage space, which restricts its application on edge devices. This paper proposes an efficient one-stage shrimp larvae detection method, FAMDet, specifically designed for complex scenarios in intensive aquaculture. First, unlike ordinary detection methods, it exploits an efficient FasterNet backbone, constructed with partial convolution, to extract effective multi-scale shrimp larvae features. Meanwhile, we construct an adaptively bi-directional fusion neck to integrate high-level semantic information and low-level detail information of shrimp larvae in a manner that sufficiently merges features and further mitigates noise interference. Finally, a decoupled detection head equipped with MPDIoU is used for precise bounding box regression of shrimp larvae. We collected images of shrimp larvae from multiple scenarios and labeled 108,365 targets for experiments. Compared with ordinary detection methods (Faster RCNN, SSD, RetinaNet, CenterNet, FCOS, DETR, and YOLOX_s), FAMDet obtains considerable advantages in accuracy, speed, and complexity. Compared with the outstanding one-stage method YOLOv8s, it improves accuracy while reducing parameters by 57 %, FLOPs by 37 %, per-image CPU inference latency by 22 %, and storage overhead by 56 %. Furthermore, FAMDet still outperforms multiple lightweight methods (EfficientDet, RT-DETR, GhostNetV2, EfficientFormerV2, EfficientViT, and MobileNetV4). In addition, we conducted experiments on the public dataset (VOC 07 + 12) to further verify the effectiveness of FAMDet. Consequently, the proposed method can effectively alleviate the limitations faced by resource-constrained devices and achieve superior shrimp larvae detection results.
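The FasterNet backbone mentioned above is built from partial convolutions, which convolve only a fraction of the channels and pass the rest through unchanged. The sketch below is a minimal PyTorch rendering of that idea (the 1/4 channel split is an assumed default), not the FAMDet implementation itself.

```python
import torch
import torch.nn as nn

class PartialConv(nn.Module):
    """Partial convolution as used in FasterNet-style backbones: a 3x3 conv
    is applied to the first 1/n_div of the channels, and the remaining
    channels are passed through unchanged."""
    def __init__(self, dim: int, n_div: int = 4):
        super().__init__()
        self.dim_conv = dim // n_div
        self.dim_untouched = dim - self.dim_conv
        self.conv = nn.Conv2d(self.dim_conv, self.dim_conv, 3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = torch.split(x, [self.dim_conv, self.dim_untouched], dim=1)
        return torch.cat([self.conv(x1), x2], dim=1)

x = torch.randn(1, 64, 80, 80)       # stand-in feature map from a larvae image
print(PartialConv(64)(x).shape)      # torch.Size([1, 64, 80, 80])
```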
{"title":"Efficient one-stage detection of shrimp larvae in complex aquaculture scenarios","authors":"Guoxu Zhang ,&nbsp;Tianyi Liao ,&nbsp;Yingyi Chen ,&nbsp;Ping Zhong ,&nbsp;Zhencai Shen ,&nbsp;Daoliang Li","doi":"10.1016/j.aiia.2025.01.009","DOIUrl":"10.1016/j.aiia.2025.01.009","url":null,"abstract":"<div><div>The swift evolution of deep learning has greatly benefited the field of intensive aquaculture. Specifically, deep learning-based shrimp larvae detection has offered important technical assistance for counting shrimp larvae and recognizing abnormal behaviors. Firstly, the transparent bodies and small sizes of shrimp larvae, combined with complex scenarios due to variations in light intensity and water turbidity, make it challenging for current detection methods to achieve high accuracy. Secondly, deep learning-based object detection demands substantial computing power and storage space, which restricts its application on edge devices. This paper proposes an efficient one-stage shrimp larvae detection method, FAMDet, specifically designed for complex scenarios in intensive aquaculture. Firstly, different from the ordinary detection methods, it exploits an efficient FasterNet backbone, constructed with partial convolution, to extract effective multi-scale shrimp larvae features. Meanwhile, we construct an adaptively bi-directional fusion neck to integrate high-level semantic information and low-level detail information of shrimp larvae in a matter that sufficiently merges features and further mitigates noise interference. Finally, a decoupled detection head equipped with MPDIoU is used for precise bounding box regression of shrimp larvae. We collected images of shrimp larvae from multiple scenarios and labeled 108,365 targets for experiments. Compared with the ordinary detection methods (Faster RCNN, SSD, RetinaNet, CenterNet, FCOS, DETR, and YOLOX_s), FAMDet has obtained considerable advantages in accuracy, speed, and complexity. Compared with the outstanding one-stage method YOLOv8s, it has improved accuracy while reducing 57 % parameters, 37 % FLOPs, 22 % inference latency per image on CPU, and 56 % storage overhead. Furthermore, FAMDet has still outperformed multiple lightweight methods (EfficientDet, RT-DETR, GhostNetV2, EfficientFormerV2, EfficientViT, and MobileNetV4). In addition, we conducted experiments on the public dataset (VOC 07 + 12) to further verify the effectiveness of FAMDet. Consequently, the proposed method can effectively alleviate the limitations faced by resource-constrained devices and achieve superior shrimp larvae detection results.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 338-349"},"PeriodicalIF":8.2,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143704748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automatic body condition scoring system for dairy cows in group state based on improved YOLOv5 and video analysis
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-01-27 DOI: 10.1016/j.aiia.2025.01.010
Jingwen Li , Pengbo Zeng , Shuai Yue , Zhiyang Zheng , Lifeng Qin , Huaibo Song
This study proposes an automated scoring system for cow body condition using an improved YOLOv5 to assess the body condition distribution of herd cows, which significantly impacts herd productivity and feeding management. A dataset was created by capturing images of the cows' hindquarters using an image sensor at the entrance of the milking hall. The system enhances feature extraction ability by introducing dual path networks and convolutional block attention modules, and improves efficiency by replacing some modules of the standard YOLOv5s with depthwise separable convolutions to reduce parameters. Furthermore, the system employs an automatic detection and segmentation algorithm to achieve individual cow segmentation and body condition acquisition in the video. Subsequently, the system computes the body condition distribution of cows in a group state. The experimental findings demonstrate that the proposed model outperforms the original YOLOv5 network with higher accuracy and fewer computations and parameters. The precision, recall, and mean average precision of the model are 94.3 %, 92.5 %, and 91.8 %, respectively. The algorithm achieved an overall detection rate of 94.2 % for individual cow segmentation and body condition acquisition in the video, with a body condition scoring accuracy of 92.5 % among accurately detected cows and an overall body condition scoring accuracy of 87.1 % across the 10 video tests.
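The depthwise separable convolution used to slim the YOLOv5s modules factorizes a standard convolution into a per-channel (depthwise) 3x3 convolution followed by a 1x1 pointwise convolution. A minimal PyTorch sketch with the parameter-count comparison it implies is given below; the layer sizes are illustrative only.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 conv (one filter per input channel) followed by a
    pointwise 1x1 conv, the substitution used to shrink YOLOv5-style blocks."""
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in, bias=False)
        self.pointwise = nn.Conv2d(c_in, c_out, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

standard = nn.Conv2d(128, 256, 3, padding=1, bias=False)
separable = DepthwiseSeparableConv(128, 256)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard), "vs", count(separable))  # 294912 vs 33920 parameters
```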
{"title":"Automatic body condition scoring system for dairy cows in group state based on improved YOLOv5 and video analysis","authors":"Jingwen Li ,&nbsp;Pengbo Zeng ,&nbsp;Shuai Yue ,&nbsp;Zhiyang Zheng ,&nbsp;Lifeng Qin ,&nbsp;Huaibo Song","doi":"10.1016/j.aiia.2025.01.010","DOIUrl":"10.1016/j.aiia.2025.01.010","url":null,"abstract":"<div><div>This study proposes an automated scoring system for cow body condition using improved YOLOv5 to assess the body condition distribution of herd cows, which significantly impacts herd productivity and feeding management. A dataset was created by capturing images of the cow's hindquarters using an image sensor at the entrance of the milking hall. This system enhances feature extraction ability by introducing dual path networks and convolutional block attention modules and improves efficiency by replacing some modules from the standard YOLOv5s with deep separable convolution to reduce parameters. Furthermore, the system employs an automatic detection and segmentation algorithm to achieve individual cow segmentation and body condition acquisition in the video. Subsequently, the system computes the body condition distribution of cows in a group state. The experimental findings demonstrate that the proposed model outperforms the original YOLOv5 network with higher accuracy and fewer computations and parameters. The precision, recall, and mean average precision of the model are 94.3 %, 92.5 %, and 91.8 %, respectively. The algorithm achieved an overall detection rate of 94.2 % for individual cow segmentation and body condition acquisition in the video, with a body condition scoring accuracy of 92.5 % among accurately detected cows and an overall body condition scoring accuracy of 87.1 % across the 10 video tests.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 350-362"},"PeriodicalIF":8.2,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143705546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Identifying key factors influencing maize stalk lodging resistance through wind tunnel simulations with machine learning algorithms
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-01-13 DOI: 10.1016/j.aiia.2025.01.007
Guanmin Huang, Ying Zhang, Shenghao Gu, Weiliang Wen, Xianju Lu, Xinyu Guo
Climate change has intensified maize stalk lodging, severely impacting global maize production. While numerous traits influence stalk lodging resistance, their relative importance remains unclear, hindering breeding efforts. This study introduces an approach combining wind tunnel testing with machine learning algorithms to quantitatively evaluate stalk lodging resistance traits. Through extensive field experiments and literature review, we identified and measured 74 phenotypic traits encompassing plant morphology, biomass, and anatomical characteristics in maize plants. Correlation analysis revealed a median linear correlation coefficient of 0.497 among these traits, with 15.1 % of correlations exceeding 0.8. Principal component analysis showed that the first five components explained 90 % of the total variance, indicating significant trait interactions. Through feature engineering and gradient boosting regression, we developed a high-precision wind speed-ear displacement prediction model (R² = 0.93) and identified 29 key traits critical for stalk lodging resistance. Sensitivity analysis revealed plant height as the most influential factor (sensitivity coefficient: −3.87), followed by traits of the 7th internode, including epidermis layer thickness (0.62), pith area (−0.60), and lignin content (0.35). Our methodological framework not only provides quantitative insights into maize stalk lodging resistance mechanisms but also establishes a systematic approach for trait evaluation. The findings offer practical guidance for breeding programs focused on enhancing stalk lodging resistance and yield stability under climate change conditions, with potential applications in agronomic practice optimization and breeding strategy development.
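As a rough illustration of the modeling step, the sketch below fits a scikit-learn GradientBoostingRegressor to synthetic stand-in data shaped like the study's 29 selected traits and an ear-displacement target, then reports R² and a feature-importance ranking. The data, hyperparameters, and train/test split are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Synthetic stand-in: 200 plants x 29 traits, plus an ear-displacement target
X = rng.normal(size=(200, 29))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.3, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(X_tr, y_tr)

print("R2 =", round(r2_score(y_te, model.predict(X_te)), 3))
# Feature importances give a first-pass ranking of the most influential traits
print(np.argsort(model.feature_importances_)[::-1][:5])
```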
{"title":"Identifying key factors influencing maize stalk lodging resistance through wind tunnel simulations with machine learning algorithms","authors":"Guanmin Huang,&nbsp;Ying Zhang,&nbsp;Shenghao Gu,&nbsp;Weiliang Wen,&nbsp;Xianju Lu,&nbsp;Xinyu Guo","doi":"10.1016/j.aiia.2025.01.007","DOIUrl":"10.1016/j.aiia.2025.01.007","url":null,"abstract":"<div><div>Climate change has intensified maize stalk lodging, severely impacting global maize production. While numerous traits influence stalk lodging resistance, their relative importance remains unclear, hindering breeding efforts. This study introduces an combining wind tunnel testing with machine learning algorithms to quantitatively evaluate stalk lodging resistance traits. Through extensive field experiments and literature review, we identified and measured 74 phenotypic traits encompassing plant morphology, biomass, and anatomical characteristics in maize plants. Correlation analysis revealed a median linear correlation coefficient of 0.497 among these traits, with 15.1 % of correlations exceeding 0.8. Principal component analysis showed that the first five components explained 90 % of the total variance, indicating significant trait interactions. Through feature engineering and gradient boosting regression, we developed a high-precision wind speed-ear displacement prediction model (R<sup>2</sup> = 0.93) and identified 29 key traits critical for stalk lodging resistance. Sensitivity analysis revealed plant height as the most influential factor (sensitivity coefficient: −3.87), followed by traits of the 7th internode including epidermis layer thickness (0.62), pith area (−0.60), and lignin content (0.35). Our methodological framework not only provides quantitative insights into maize stalk lodging resistance mechanisms but also establishes a systematic approach for trait evaluation. The findings offer practical guidance for breeding programs focused on enhancing stalk lodging resistance and yield stability under climate change conditions, with potential applications in agronomic practice optimization and breeding strategy development.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 316-326"},"PeriodicalIF":8.2,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143704965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comprehensive review on 3D point cloud segmentation in plants
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-01-11 DOI: 10.1016/j.aiia.2025.01.006
Hongli Song , Weiliang Wen , Sheng Wu , Xinyu Guo
Segmentation of three-dimensional (3D) point clouds is fundamental to comprehending unstructured structural and morphological data. It plays a critical role in research related to plant phenomics, 3D plant modeling, and functional-structural plant modeling. Although technologies for plant point cloud segmentation (PPCS) have advanced rapidly, a systematic overview of the development process has been lacking. This paper presents an overview of the progress made in 3D point cloud segmentation research in plants. It starts by discussing the methods used to acquire point clouds of plants and analyzes the impact of point cloud resolution and quality on the segmentation task. It then introduces multi-scale point cloud segmentation in plants. The paper summarizes and analyzes traditional methods for PPCS, including global and local features, and discusses the progress of machine learning-based segmentation of plant point clouds through supervised, unsupervised, and integrated approaches. It also summarizes the datasets available for deep learning-oriented PPCS and explains the advantages and disadvantages of projection-based, voxel-based, and point-based deep learning methods. Finally, the development of PPCS is discussed and its prospects are outlined. Deep learning methods are expected to become dominant in the field of PPCS, and 3D point cloud segmentation is likely to become more automated, with higher resolution and precision.
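To make the voxel-based family of methods concrete, the NumPy-only sketch below performs the basic quantization step, averaging all points that fall into the same grid cell; the random point cloud and the 5 cm voxel size are placeholders.

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Average all points falling into the same voxel_size^3 cell, the
    quantization step underlying voxel-based plant point cloud methods."""
    keys = np.floor(points / voxel_size).astype(np.int64)      # voxel index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)  # voxel id per point
    sums = np.zeros((inverse.max() + 1, 3))
    counts = np.zeros(inverse.max() + 1)
    np.add.at(sums, inverse, points)   # accumulate coordinates per voxel
    np.add.at(counts, inverse, 1)      # count points per voxel
    return sums / counts[:, None]      # voxel centroids

cloud = np.random.rand(100_000, 3)           # stand-in for a scanned plant (metres)
print(voxel_downsample(cloud, 0.05).shape)   # far fewer, evenly spaced points
```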
{"title":"Comprehensive review on 3D point cloud segmentation in plants","authors":"Hongli Song ,&nbsp;Weiliang Wen ,&nbsp;Sheng Wu ,&nbsp;Xinyu Guo","doi":"10.1016/j.aiia.2025.01.006","DOIUrl":"10.1016/j.aiia.2025.01.006","url":null,"abstract":"<div><div>Segmentation of three-dimensional (3D) point clouds is fundamental in comprehending unstructured structural and morphological data. It plays a critical role in research related to plant phenomics, 3D plant modeling, and functional-structural plant modeling. Although technologies for plant point cloud segmentation (PPCS) have advanced rapidly, there has been a lack of a systematic overview of the development process. This paper presents an overview of the progress made in 3D point cloud segmentation research in plants. It starts by discussing the methods used to acquire point clouds in plants, and analyzes the impact of point cloud resolution and quality on the segmentation task. It then introduces multi-scale point cloud segmentation in plants. The paper summarizes and analyzes traditional methods for PPCS, including the global and local features. This paper discusses the progress of machine learning-based segmentation on plant point clouds through supervised, unsupervised, and integrated approaches. It also summarizes the datasets that for PPCS using deep learning-oriented methods and explains the advantages and disadvantages of deep learning-based methods for projection-based, voxel-based, and point-based approaches respectively. Finally, the development of PPCS is discussed and prospected. Deep learning methods are predicted to become dominant in the field of PPCS, and 3D point cloud segmentation would develop towards more automated with higher resolution and precision.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 296-315"},"PeriodicalIF":8.2,"publicationDate":"2025-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143704964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
High-throughput phenotyping techniques for forage: Status, bottleneck, and challenges
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-01-10 DOI: 10.1016/j.aiia.2025.01.003
Tao Cheng , Dongyan Zhang , Gan Zhang , Tianyi Wang , Weibo Ren , Feng Yuan , Yaling Liu , Zhaoming Wang , Chunjiang Zhao
High-throughput phenotyping (HTP) technology is now a significant bottleneck in the efficient selection and breeding of superior forage genetic resources. To better understand the status of forage phenotyping research and identify key directions for development, this review summarizes advances in HTP technology for forage phenotypic analysis over the past ten years. This paper reviews the unique aspects and research priorities in forage phenotypic monitoring, highlights key remote sensing platforms, examines the applications of advanced sensing technology for quantifying phenotypic traits, explores artificial intelligence (AI) algorithms in phenotypic data integration and analysis, and assesses recent progress in phenotypic genomics. The practical applications of HTP technology in forage remain constrained by several challenges. These include establishing uniform data collection standards, designing effective algorithms to handle complex genetic and environmental interactions, deepening the cross-exploration of phenomics-genomics, solving the problem of pathological inversion of forage phenotypic growth monitoring models, and developing low-cost forage phenotypic equipment. Resolving these challenges will unlock the full potential of HTP, enabling precise identification of superior forage traits, accelerating the breeding of superior varieties, and ultimately improving forage yield.
{"title":"High-throughput phenotyping techniques for forage: Status, bottleneck, and challenges","authors":"Tao Cheng ,&nbsp;Dongyan Zhang ,&nbsp;Gan Zhang ,&nbsp;Tianyi Wang ,&nbsp;Weibo Ren ,&nbsp;Feng Yuan ,&nbsp;Yaling Liu ,&nbsp;Zhaoming Wang ,&nbsp;Chunjiang Zhao","doi":"10.1016/j.aiia.2025.01.003","DOIUrl":"10.1016/j.aiia.2025.01.003","url":null,"abstract":"<div><div>High-throughput phenotyping (HTP) technology is now a significant bottleneck in the efficient selection and breeding of superior forage genetic resources. To better understand the status of forage phenotyping research and identify key directions for development, this review summarizes advances in HTP technology for forage phenotypic analysis over the past ten years. This paper reviews the unique aspects and research priorities in forage phenotypic monitoring, highlights key remote sensing platforms, examines the applications of advanced sensing technology for quantifying phenotypic traits, explores artificial intelligence (AI) algorithms in phenotypic data integration and analysis, and assesses recent progress in phenotypic genomics. The practical applications of HTP technology in forage remain constrained by several challenges. These include establishing uniform data collection standards, designing effective algorithms to handle complex genetic and environmental interactions, deepening the cross-exploration of phenomics-genomics, solving the problem of pathological inversion of forage phenotypic growth monitoring models, and developing low-cost forage phenotypic equipment. Resolving these challenges will unlock the full potential of HTP, enabling precise identification of superior forage traits, accelerating the breeding of superior varieties, and ultimately improving forage yield.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 1","pages":"Pages 98-115"},"PeriodicalIF":8.2,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143097558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Crop-conditional semantic segmentation for efficient agricultural disease assessment
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-01-10 DOI: 10.1016/j.aiia.2025.01.002
Artzai Picon , Itziar Eguskiza , Pablo Galan , Laura Gomez-Zamanillo , Javier Romero , Christian Klukas , Arantza Bereciartua-Perez , Mike Scharner , Ramon Navarra-Mestre
In this study, we introduced an innovative crop-conditional semantic segmentation architecture that seamlessly incorporates contextual metadata (crop information). This is achieved by merging the contextual information at a late layer stage, allowing the method to be integrated with any semantic segmentation architecture, including novel ones. To evaluate the effectiveness of this approach, we curated a challenging dataset of over 100,000 images captured in real-field conditions using mobile phones. This dataset includes various disease stages across 21 diseases and seven crops (wheat, barley, corn, rice, rapeseed, vinegrape, and cucumber), with the added complexity of multiple diseases coexisting in a single image. We demonstrate that incorporating contextual multi-crop information significantly enhances the performance of semantic segmentation models for plant disease detection. By leveraging crop-specific metadata, our approach achieves higher accuracy and better generalization across diverse crops (F1 = 0.68, r = 0.75) compared to traditional methods (F1 = 0.24, r = 0.68). Additionally, the adoption of a semi-supervised approach based on pseudo-labeling of single diseased plants offers significant advantages for plant disease segmentation and quantification (F1 = 0.73, r = 0.95). This method enhances the model's performance by leveraging both labeled and unlabeled data, reducing the dependency on extensive manual annotations, which are often time-consuming and costly.
The deployment of this algorithm holds the potential to revolutionize the digitization of crop protection product testing, ensuring heightened repeatability while minimizing human subjectivity. By addressing the challenges of semantic segmentation and disease quantification, we contribute to more effective and precise phenotyping, ultimately supporting better crop management and protection strategies.
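One plausible reading of the late-stage metadata fusion described in the abstract is sketched below in PyTorch: a learned crop embedding is tiled over the spatial grid, concatenated with the final decoder features, and fed to a 1x1 classification layer. The layer sizes, embedding width, and class count (21 diseases plus background) are assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

class CropConditionalHead(nn.Module):
    """Illustrative late-fusion head: a learned crop embedding is tiled over
    the spatial grid and concatenated with the final decoder features before
    the per-pixel classification layer."""
    def __init__(self, feat_ch: int, n_crops: int, n_classes: int, emb_dim: int = 16):
        super().__init__()
        self.crop_emb = nn.Embedding(n_crops, emb_dim)
        self.classifier = nn.Conv2d(feat_ch + emb_dim, n_classes, kernel_size=1)

    def forward(self, feats: torch.Tensor, crop_id: torch.Tensor) -> torch.Tensor:
        b, _, h, w = feats.shape
        emb = self.crop_emb(crop_id)                     # (B, emb_dim)
        emb = emb[:, :, None, None].expand(b, -1, h, w)  # tile over H x W
        return self.classifier(torch.cat([feats, emb], dim=1))

feats = torch.randn(2, 256, 64, 64)   # decoder output of any segmentation network
crop_id = torch.tensor([0, 4])        # integer crop identifiers, e.g. wheat, cucumber
print(CropConditionalHead(256, 7, 22)(feats, crop_id).shape)  # (2, 22, 64, 64)
```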
{"title":"Crop-conditional semantic segmentation for efficient agricultural disease assessment","authors":"Artzai Picon ,&nbsp;Itziar Eguskiza ,&nbsp;Pablo Galan ,&nbsp;Laura Gomez-Zamanillo ,&nbsp;Javier Romero ,&nbsp;Christian Klukas ,&nbsp;Arantza Bereciartua-Perez ,&nbsp;Mike Scharner ,&nbsp;Ramon Navarra-Mestre","doi":"10.1016/j.aiia.2025.01.002","DOIUrl":"10.1016/j.aiia.2025.01.002","url":null,"abstract":"<div><div>In this study, we introduced an innovative crop-conditional semantic segmentation architecture that seamlessly incorporates contextual metadata (crop information). This is achieved by merging the contextual information at a late layer stage, allowing the method to be integrated with any semantic segmentation architecture, including novel ones. To evaluate the effectiveness of this approach, we curated a challenging dataset of over 100,000 images captured in real-field conditions using mobile phones. This dataset includes various disease stages across 21 diseases and seven crops (wheat, barley, corn, rice, rape-seed, vinegrape, and cucumber), with the added complexity of multiple diseases coexisting in a single image. We demonstrate that incorporating contextual multi-crop information significantly enhances the performance of semantic segmentation models for plant disease detection. By leveraging crop-specific metadata, our approach achieves higher accuracy and better generalization across diverse crops (F1 = 0.68, <em>r</em> = 0.75) compared to traditional methods (F1 = 0.24, <em>r</em> = 0.68). Additionally, the adoption of a semi-supervised approach based on pseudo-labeling of single diseased plants, offers significant advantages for plant disease segmentation and quantification (F1 = 0.73, <em>r</em> = 0.95). This method enhances the model's performance by leveraging both labeled and unlabeled data, reducing the dependency on extensive manual annotations, which are often time-consuming and costly.</div><div>The deployment of this algorithm holds the potential to revolutionize the digitization of crop protection product testing, ensuring heightened repeatability while minimizing human subjectivity. By addressing the challenges of semantic segmentation and disease quantification, we contribute to more effective and precise phenotyping, ultimately supporting better crop management and protection strategies.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 1","pages":"Pages 79-87"},"PeriodicalIF":8.2,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143097561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Knowledge-guided temperature correction method for soluble solids content detection of watermelon based on Vis/NIR spectroscopy
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-01-09 DOI: 10.1016/j.aiia.2025.01.004
Zhizhong Sun , Jie Yang , Yang Yao , Dong Hu , Yibin Ying , Junxian Guo , Lijuan Xie
Visible/near-infrared (Vis/NIR) spectroscopy technology has been extensively utilized for the determination of soluble solids content (SSC) in fruits. Nonetheless, the spectral distortion resulting from temperature variations in the sample leads to a decrease in detection accuracy. To mitigate the influence of temperature fluctuations on the accuracy of SSC detection in fruits, using watermelon as an example, this study presents a knowledge-guided temperature correction method utilizing one-dimensional convolutional neural networks (1D-CNN). The method consists of two stages: the first stage utilizes 1D-CNN models and the gradient-weighted class activation mapping (Grad-CAM) method to acquire gradient-weighted features correlated with temperature; the second stage maps these features, integrates them with the original Vis/NIR spectrum, and then trains and tests the partial least squares (PLS) model. This knowledge-guided method can identify wavelength bands with high temperature correlation in the Vis/NIR spectra, offering valuable guidance for spectral data processing. The performance of the PLS model constructed using the 15 °C spectrum guided by this method is superior to that of the global model, reducing the root mean square error of prediction (RMSEP) to 0.324 °Brix, which is 32.5 % lower than the RMSEP of the global model (0.480 °Brix). The method proposed in this study achieves better temperature correction than slope and bias correction, piecewise direct standardization, and external parameter orthogonalization methods. The results indicate that the knowledge-guided temperature correction method based on deep learning can significantly enhance the detection accuracy of SSC in watermelon, providing a valuable reference for the development of PLS calibration methods.
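The second-stage PLS modeling and the RMSEP metric can be reproduced in outline with scikit-learn, as in the sketch below; the synthetic spectra, SSC targets, and number of latent variables are placeholders rather than the study's calibration data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Synthetic stand-in: 120 watermelon spectra x 400 wavelengths, SSC targets in °Brix
X = rng.normal(size=(120, 400))
y = 0.8 * X[:, 50] + 0.5 * X[:, 200] + rng.normal(scale=0.2, size=120) + 10.0

X_cal, X_pred, y_cal, y_true = train_test_split(X, y, test_size=0.3, random_state=1)
pls = PLSRegression(n_components=8)   # number of latent variables is an assumption
pls.fit(X_cal, y_cal)

y_hat = pls.predict(X_pred).ravel()
rmsep = np.sqrt(np.mean((y_hat - y_true) ** 2))  # root mean square error of prediction
print(f"RMSEP = {rmsep:.3f} °Brix")
```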
{"title":"Knowledge-guided temperature correction method for soluble solids content detection of watermelon based on Vis/NIR spectroscopy","authors":"Zhizhong Sun ,&nbsp;Jie Yang ,&nbsp;Yang Yao ,&nbsp;Dong Hu ,&nbsp;Yibin Ying ,&nbsp;Junxian Guo ,&nbsp;Lijuan Xie","doi":"10.1016/j.aiia.2025.01.004","DOIUrl":"10.1016/j.aiia.2025.01.004","url":null,"abstract":"<div><div>Visible/near-infrared (Vis/NIR) spectroscopy technology has been extensively utilized for the determination of soluble solids content (SSC) in fruits. Nonetheless, the spectral distortion resulting from temperature variations in the sample leads to a decrease in detection accuracy. To mitigate the influence of temperature fluctuations on the accuracy of SSC detection in fruits, using watermelon as an example, this study presents a knowledge-guided temperature correction method utilizing one-dimensional convolutional neural networks (1D-CNN). This method consists of two stages: the first stage involves utilizing 1D-CNN models and gradient-weighted class activation mapping (Grad-CAM) method to acquire gradient-weighted features correlating with temperature. The second stage involves mapping these features and integrating them with the original Vis/NIR spectrum, and then train and test the partial least squares (PLS) model. This knowledge-guided method can identify wavelength bands with high temperature correlation in the Vis/NIR spectra, offering valuable guidance for spectral data processing. The performance of the PLS model constructed using the 15 °C spectrum guided by this method is superior to that of the global model, and can reduce the root mean square error of the prediction set (RMSEP) to 0.324°Brix, which is 32.5 % lower than the RMSEP of the global model (0.480°Brix). The method proposed in this study has superior temperature correction effects than slope and bias correction, piecewise direct standardization, and external parameter orthogonalization correction methods. The results indicate that the knowledge-guided temperature correction method based on deep learning can significantly enhance the detection accuracy of SSC in watermelon, providing valuable reference for the development of PLS calibration methods.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 1","pages":"Pages 88-97"},"PeriodicalIF":8.2,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143097560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0