
Computers and Electronics in Agriculture: Latest Publications

A novel framework for dynamic and quantitative mapping of damage severity due to compound Drought–Heatwave impacts on tea Plantations, integrating Sentinel-2 and UAV images
IF 7.7 · CAS Tier 1, Agricultural & Forestry Sciences · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2024-11-28 · DOI: 10.1016/j.compag.2024.109688
Ran Huang, Yuanjun Xiao, Shengcheng Li, Jianing Li, Wei Weng, Qi Shao, Jingcheng Zhang, Yao Zhang, Lingbo Yang, Chao Huang, Weiwei Sun, Weiwei Liu, Hongwei Jin, Jingfeng Huang
In 2022, China experienced a historically rare compound drought–heatwave (CDH) event, which had more severe impacts on vegetation than individual extreme events. However, quantitatively mapping the damage severity of CDH on tea trees using satellite data remains a significant challenge. Here we propose a novel framework for dynamic and quantitative mapping of tea tree damage severity caused by the 2022 CDH using Sentinel-2 and Unmanned Aerial Vehicle (UAV) data. Extreme Gradient Boosting (XGBoost) was selected from among XGBoost, Random Forest (RF), Logistic Regression (LR), and Naive Bayes as the optimal machine learning algorithm for extracting tea plantations from Sentinel-2 data. The User’s Accuracy and Producer’s Accuracy for tea plantation extraction were 92.20 % and 93.51 %, respectively. UAV images with 2.5 cm spatial resolution were used to detect tea tree damage caused by the 2022 CDH. A new index, the CDH damage severity index (CDH_DSI), was proposed to quantitatively evaluate CDH damage severity on tea trees at the pixel level, with a spatial resolution of 10 m × 10 m. Based on the tea plantation and damaged tea tree detection results, UAV-derived CDH_DSI was calculated and used as ground truth. XGBoost was then selected from among XGBoost, RF, and LR as the optimal CDH_DSI prediction model, with Sentinel-2-derived vegetation indices and spectral reflectance as predictors; the coefficient of determination was 0.81 and the root mean squared error was 7.61 %. Finally, dynamic and quantitative CDH_DSI maps were generated with the optimal prediction model. The results show that 50 % of tea plantations in Wuyi were damaged by the prolonged 2022 CDH event, attributable to precipitation deficits and heatwaves. Given that more severe CDH events are projected for the future, quantifying their impacts can provide decision-making support for disaster mitigation and prevention.
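The second stage described above regresses a UAV-derived severity index on Sentinel-2 predictors. A minimal sketch of that idea, assuming hypothetical feature files and standard xgboost/scikit-learn APIs (not the authors' code):

```python
# Sketch: predict a UAV-derived damage severity index (CDH_DSI, %) from
# Sentinel-2 vegetation indices with XGBoost, then score with R^2 and RMSE.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# X: per-pixel predictors (e.g., NDVI, EVI, band reflectances); y: UAV-derived CDH_DSI
X, y = np.load("s2_features.npy"), np.load("uav_cdh_dsi.npy")  # hypothetical files
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=6)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("R2 =", r2_score(y_te, pred))
print("RMSE =", mean_squared_error(y_te, pred) ** 0.5)
```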
{"title":"A novel framework for dynamic and quantitative mapping of damage severity due to compound Drought–Heatwave impacts on tea Plantations, integrating Sentinel-2 and UAV images","authors":"Ran Huang ,&nbsp;Yuanjun Xiao ,&nbsp;Shengcheng Li ,&nbsp;Jianing Li ,&nbsp;Wei Weng ,&nbsp;Qi Shao ,&nbsp;Jingcheng Zhang ,&nbsp;Yao Zhang ,&nbsp;Lingbo Yang ,&nbsp;Chao Huang ,&nbsp;Weiwei Sun ,&nbsp;Weiwei Liu ,&nbsp;Hongwei Jin ,&nbsp;Jingfeng Huang","doi":"10.1016/j.compag.2024.109688","DOIUrl":"10.1016/j.compag.2024.109688","url":null,"abstract":"<div><div>In 2022, China experienced a historically rare compound drought–heatwave (CDH) event, which had more severe impacts on vegetation compared with individual extreme events. However, quantitatively mapping the damage severity of CDH on tea tree using satellite data remains a significant challenge. Here we proposed a novel framework for dynamic and quantitative mapping of tea trees damage severity caused by CDH in 2022 using Sentinel-2 and Unmanned Aerial Vehicle (UAV) data. The Extreme Gradient Boosting (XGBoost) was selected as the optimal machine learning algorithm to extract tea plantations using Sentinel-2 data from XGBoost, Random Forest (RF), Logistic regression (LR), and Naive Bayes. The User’s Accuracy and Producer’s Accuracy for the extraction of tea plantations are 92.20 % and 93.51 %, respectively. UAV images with 2.5 cm spatial resolution were utilized to detect the tea trees damaged caused by the CDH in 2022. A new index, named the CDH damage severity index (CDH_DSI), was proposed to quantitatively evaluate the damage severity of CDH on tea trees at pixel level, with a spatial resolution of 10 m x 10 m. Based on the results of tea plantations and damaged tea trees detection, UAV-derived CDH_DSI was calculated and used as ground truth data. Then, The XGBoost was selected as the optimal CDH_DSI prediction model from XGBoost, RF, and LR with the Sentnel-2 derived vegetation indices and spectral reflectance as predictors. The coefficient of determination was 0.81 and root mean squared error was 7.61 %. Finally, dynamic and quantitative CDH_DSI maps were generated with the optimal CDH_DSI prediction model. The results show that 50 percent of tea plantations in Wuyi were damaged by the prolonged CDH event in 2022. These results can be attributed to precipitation deficits and heatwaves. Given that more severe CDH events are projected for the future, quantifying their impacts can provide decision-making support for disaster mitigation and prevention.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"228 ","pages":"Article 109688"},"PeriodicalIF":7.7,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142746255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unmanned aerial system and machine learning driven Digital-Twin framework for in-season cotton growth forecasting
IF 7.7 · CAS Tier 1, Agricultural & Forestry Sciences · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2024-11-28 · DOI: 10.1016/j.compag.2024.109589
Pankaj Pal, Juan Landivar-Bowles, Jose Landivar-Scott, Nick Duffield, Kevin Nowka, Jinha Jung, Anjin Chang, Kiju Lee, Lei Zhao, Mahendra Bhandari
In the past decade, Unmanned Aerial Systems (UAS) have made a significant impact on various sectors, including precision agriculture, by enabling remote monitoring of crop growth and development. Monitoring and managing crops effectively throughout the growing season is crucial for optimizing yield. The integration of UAS-monitored data and machine learning has greatly advanced crop production management, improving key areas such as irrigation scheduling, crop termination analysis, and yield prediction. This study presents the development of a Digital Twin (DT) for cotton crops using UAS-captured RGB data. The primary objective of this DT is to forecast cotton crop features during the growing season, including Canopy Cover (CC), Canopy Height (CH), Canopy Volume (CV), and Excess Greenness (EXG). The predictive analytics component of the DT employs machine learning regression to extract crop feature growth patterns from UAS data collected from 2020 to 2023. During the current season, real-time UAS data and historical growth patterns are combined using a novel hybrid model generation strategy to produce growth patterns for forecasting. Comparisons of the DT-based forecasts with actual data demonstrated low RMSE for CC, CH, CV, and EXG. The proposed DT framework, which forecasts cotton crop features up to 30 days ahead starting 80 days after sowing, outperformed existing forecasting methods; the RRMSE for CC, CH, CV, and EXG was 9 %, 13 %, 14 %, and 18 %, respectively. Furthermore, the potential applications of forecasted data in biomass estimation and yield prediction are highlighted, emphasizing their significance in optimizing agricultural practices.
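A minimal sketch of in-season growth-curve forecasting, assuming a logistic form for canopy cover; the paper's hybrid historical/real-time model generation strategy is more elaborate, and the observations below are illustrative values:

```python
# Sketch: fit a logistic growth curve to in-season canopy cover observations,
# then extrapolate 30 days ahead from day 80 after sowing.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: K = asymptote, r = rate, t0 = inflection (days after sowing)."""
    return K / (1.0 + np.exp(-r * (t - t0)))

t_obs = np.array([30, 40, 50, 60, 70, 80], dtype=float)   # days after sowing
cc_obs = np.array([5, 12, 28, 50, 68, 78], dtype=float)   # UAS-observed canopy cover (%)

params, _ = curve_fit(logistic, t_obs, cc_obs, p0=[90, 0.1, 55])
t_future = np.arange(81, 111)                 # forecast horizon: 30 days ahead
cc_forecast = logistic(t_future, *params)
print(cc_forecast[:5])
```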
{"title":"Unmanned aerial system and machine learning driven Digital-Twin framework for in-season cotton growth forecasting","authors":"Pankaj Pal ,&nbsp;Juan Landivar-Bowles ,&nbsp;Jose Landivar-Scott ,&nbsp;Nick Duffield ,&nbsp;Kevin Nowka ,&nbsp;Jinha Jung ,&nbsp;Anjin Chang ,&nbsp;Kiju Lee ,&nbsp;Lei Zhao ,&nbsp;Mahendra Bhandari","doi":"10.1016/j.compag.2024.109589","DOIUrl":"10.1016/j.compag.2024.109589","url":null,"abstract":"<div><div>In the past decade, Unmanned Aerial Systems (UAS) have made a significant impact on various sectors, including precision agriculture, by enabling remote monitoring of crop growth and development. Monitoring and managing crops effectively throughout the growing season are crucial for optimizing crop yield. The integration of UAS-monitored data and machine learning has greatly advanced crop production management, resulting in improvements in key areas such as irrigation scheduling, crop termination analysis, and predicting yield. This study presents the development of a Digital Twin (DT) for cotton crops using UAS captured RGB data. The primary objective of this DT is to forecast various cotton crop features during the growing season, including Canopy Cover (CC), Canopy Height (CH), Canopy Volume (CV), and Excess Greenness (EXG). Predictive analytics as part of DT development employs machine learning regression to extract crop feature growth patterns from UAS data collected from 2020 to 2023. During the current season, real-time UAS data and historical growth patterns are combined to generate growth patterns using a novel hybrid model generation strategy for forecasting. Comparisons of the DT-based forecasts to actual data demonstrated low RMSE for CC, CH, CV, and EXG. The proposed DT framework, which accurately forecasts cotton crop features up to 30 days into the future starting 80 days after sowing, was found to outperform existing forecasting methods. Notably, the RRMSE for CC, CH, CV, and EXG was measured to be 9, 13, 14, and 18 percent, respectively. Furthermore, the potential applications of forecasted data in biomass estimation and yield prediction are highlighted, emphasizing their significance in optimizing agricultural practices.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"228 ","pages":"Article 109589"},"PeriodicalIF":7.7,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142746253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancing pollinator conservation: Monitoring of bees through object recognition
IF 7.7 · CAS Tier 1, Agricultural & Forestry Sciences · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2024-11-28 · DOI: 10.1016/j.compag.2024.109665
Ajay John Alex, Chloe M. Barnes, Pedro Machado, Isibor Ihianle, Gábor Markó, Martin Bencsik, Jordan J. Bird
In an era of rapid climate change and its adverse effects on food production, technological intervention to monitor pollinator conservation is of paramount importance for environmental monitoring and global food security. The survival of the human species depends on the conservation of pollinators. This article explores the use of computer vision and object recognition to autonomously track and report bee behaviour from images. A novel dataset of 9664 images containing bees was extracted from video streams and annotated with bounding boxes. With training, validation, and testing sets (6722, 1915, and 997 images, respectively), the results of fine-tuning COCO-pretrained YOLO models show that YOLOv5m is the most effective approach in terms of recognition accuracy. However, YOLOv5s proved best suited for real-time bee detection, with an average processing and inference time of 5.1 ms per video frame at the cost of slightly lower accuracy. The trained model is then packaged within an explainable AI interface that converts detection events into timestamped reports and charts, with the aim of facilitating use by non-technical users, such as expert stakeholders from the apiculture industry, towards informing responsible consumption and production.
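A minimal sketch of converting YOLOv5 detections into timestamped counts, assuming a hypothetical video file and the public torch.hub YOLOv5 interface rather than the authors' fine-tuned weights or reporting interface:

```python
# Sketch: run YOLOv5 on a video stream and log one timestamped bee count per frame.
import csv
import datetime
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)  # or a fine-tuned 'custom' checkpoint

cap = cv2.VideoCapture("hive_entrance.mp4")  # hypothetical video stream
with open("bee_events.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "n_bees"])
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # BGR -> RGB
        n = len(results.xyxy[0])                                 # one row per detection
        writer.writerow([datetime.datetime.now().isoformat(), n])
cap.release()
```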
{"title":"Enhancing pollinator conservation: Monitoring of bees through object recognition","authors":"Ajay John Alex ,&nbsp;Chloe M. Barnes ,&nbsp;Pedro Machado ,&nbsp;Isibor Ihianle ,&nbsp;Gábor Markó ,&nbsp;Martin Bencsik ,&nbsp;Jordan J. Bird","doi":"10.1016/j.compag.2024.109665","DOIUrl":"10.1016/j.compag.2024.109665","url":null,"abstract":"<div><div>In an era of rapid climate change and its adverse effects on food production, technological intervention to monitor pollinator conservation is of paramount importance for environmental monitoring and conservation for global food security. The survival of the human species depends on the conservation of pollinators. This article explores the use of Computer Vision and Object Recognition to autonomously track and report bee behaviour from images. A novel dataset of 9664 images containing bees is extracted from video streams and annotated with bounding boxes. With training, validation and testing sets (6722, 1915, and 997 images, respectively), the results of the COCO-based YOLO model fine-tuning approaches show that YOLOv5 m is the most effective approach in terms of recognition accuracy. However, YOLOv5s was shown to be the most optimal for real-time bee detection with an average processing and inference time of 5.1 ms per video frame at the cost of slightly lower ability. The trained model is then packaged within an explainable AI interface, which converts detection events into timestamped reports and charts, with the aim of facilitating use by non-technical users such as expert stakeholders from the apiculture industry towards informing responsible consumption and production.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"228 ","pages":"Article 109665"},"PeriodicalIF":7.7,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142746256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
High-throughput 3D shape completion of potato tubers on a harvester
IF 7.7 · CAS Tier 1, Agricultural & Forestry Sciences · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2024-11-28 · DOI: 10.1016/j.compag.2024.109673
Pieter M. Blok, Federico Magistri, Cyrill Stachniss, Haozhou Wang, James Burridge, Wei Guo
Potato yield is an important metric for farmers seeking to further optimize their cultivation practices. It can be estimated on a harvester using an RGB-D camera that measures the three-dimensional (3D) volume of individual potato tubers. A challenge, however, is that the 3D shape derived from RGB-D images is only partially complete, underestimating the actual volume. To address this issue, we developed a 3D shape completion network, called CoRe++, which completes the 3D shape from RGB-D images. CoRe++ is a deep learning network consisting of a convolutional encoder and a decoder. The encoder compresses RGB-D images into latent vectors that the decoder uses to complete the 3D shape via the deep signed distance field network (DeepSDF). To evaluate CoRe++, we collected partial and complete 3D point clouds of 339 potato tubers on an operational harvester in Japan. On the 1425 RGB-D images in the test set (representing 51 unique potato tubers), our network achieved a completion accuracy of 2.8 mm on average. For volumetric estimation, the root mean squared error (RMSE) was 22.6 ml, better than the RMSE of linear regression (31.1 ml) and of the base model (36.9 ml). We found that the RMSE can be further reduced to 18.2 ml when performing 3D shape completion in the center of the RGB-D image. With an average 3D shape completion time of 10 ms per tuber, we conclude that CoRe++ is both fast and accurate enough to be implemented on an operational harvester for high-throughput potato yield estimation. Its high throughput and accuracy allow it to be applied to other tuber, fruit, and vegetable crops, thereby enabling versatile, accurate, and real-time yield monitoring in precision agriculture. Our code, network weights and dataset are publicly available at https://github.com/UTokyo-FieldPhenomics-Lab/corepp.git.
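A minimal sketch of a DeepSDF-style decoder, with illustrative layer sizes rather than CoRe++'s published architecture (the real implementation is in the linked repository):

```python
# Sketch: latent code + 3D query point -> signed distance to the tuber surface.
import torch
import torch.nn as nn

class SDFDecoder(nn.Module):
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Tanh(),   # signed distance squashed to [-1, 1]
        )

    def forward(self, latent, xyz):
        # latent: (N, latent_dim) from the RGB-D encoder; xyz: (N, 3) query points
        return self.net(torch.cat([latent, xyz], dim=-1))

decoder = SDFDecoder()
z = torch.randn(4, 128)            # latent codes for 4 tubers
pts = torch.rand(4, 3) * 2 - 1     # query points in the unit cube
sdf = decoder(z, pts)              # negative inside the surface, positive outside
print(sdf.shape)                   # torch.Size([4, 1])
```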
{"title":"High-throughput 3D shape completion of potato tubers on a harvester","authors":"Pieter M. Blok ,&nbsp;Federico Magistri ,&nbsp;Cyrill Stachniss ,&nbsp;Haozhou Wang ,&nbsp;James Burridge ,&nbsp;Wei Guo","doi":"10.1016/j.compag.2024.109673","DOIUrl":"10.1016/j.compag.2024.109673","url":null,"abstract":"<div><div>Potato yield is an important metric for farmers to further optimize their cultivation practices. Potato yield can be estimated on a harvester using an RGB-D camera that can estimate the three-dimensional (3D) volume of individual potato tubers. A challenge, however, is that the 3D shape derived from RGB-D images is only partially completed, underestimating the actual volume. To address this issue, we developed a 3D shape completion network, called CoRe++, which can complete the 3D shape from RGB-D images. CoRe++ is a deep learning network that consists of a convolutional encoder and a decoder. The encoder compresses RGB-D images into latent vectors that are used by the decoder to complete the 3D shape using the deep signed distance field network (DeepSDF). To evaluate our CoRe++ network, we collected partial and complete 3D point clouds of 339 potato tubers on an operational harvester in Japan. On the 1425 RGB-D images in the test set (representing 51 unique potato tubers), our network achieved a completion accuracy of 2.8 mm on average. For volumetric estimation, the root mean squared error (RMSE) was 22.6 ml, and this was better than the RMSE of the linear regression (31.1 ml) and the base model (36.9 ml). We found that the RMSE can be further reduced to 18.2 ml when performing the 3D shape completion in the center of the RGB-D image. With an average 3D shape completion time of 10 ms per tuber, we can conclude that CoRe++ is both fast and accurate enough to be implemented on an operational harvester for high-throughput potato yield estimation. CoRe++’s high-throughput and accurate processing allows it to be applied to other tuber, fruit and vegetable crops, thereby enabling versatile, accurate and real-time yield monitoring in precision agriculture. Our code, network weights and dataset are publicly available at <span><span>https://github.com/UTokyo-FieldPhenomics-Lab/corepp.git</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"228 ","pages":"Article 109673"},"PeriodicalIF":7.7,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142746254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Chicken body temperature monitoring method in complex environment based on multi-source image fusion and deep learning
IF 7.7 · CAS Tier 1, Agricultural & Forestry Sciences · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2024-11-28 · DOI: 10.1016/j.compag.2024.109689
Pei Wang, Pengxin Wu, Chao Wang, Xiaofeng Huang, Lihong Wang, Chengsong Li, Qi Niu, Hui Li
Severe diseases in chickens present substantial risks to the poultry husbandry industry. Notably, alterations in body temperature serve as critical clinical indicators of these diseases. Consequently, timely and accurate monitoring of body temperature is essential for the early detection of severe health issues in chickens. This study presents a novel method for simultaneous body temperature detection of multiple chickens in caged poultry environments. A dataset of 2896 chicken head images was developed. The YOLOv8n-mvc model was created to accurately detect chicken head positions and extract temperature and distance information through the fusion of RGB, thermal infrared, and depth images. The chicken head temperature was then calibrated using the distance information. The YOLOv8n-mvc model achieved a precision of 91.6 %, recall of 92.5 %, F1 score of 92.0 %, and mAP@0.5 of 96.0 %. The model was successfully deployed on an edge computing device for validation tests, demonstrating its feasibility for chicken body temperature detection. This study provides a reference for developing a chicken health monitoring system based on body temperature.
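A minimal sketch of a distance-calibrated temperature readout for a detected head region; the linear correction and its coefficients are assumptions standing in for a fitted calibration, and the frames are stand-in arrays:

```python
# Sketch: fuse a thermal ROI (from a detected head bbox) with aligned depth,
# then apply an assumed linear distance correction to the raw reading.
import numpy as np

def head_temperature(thermal, depth, box, a=0.8, b=0.0):
    """thermal, depth: aligned 2-D arrays; box: (x1, y1, x2, y2) head bbox.
    a, b: hypothetical calibration coefficients (would be fit from data)."""
    x1, y1, x2, y2 = box
    t_raw = thermal[y1:y2, x1:x2].max()        # hottest pixel in the head region
    d = np.nanmedian(depth[y1:y2, x1:x2])      # camera-to-head distance (m)
    return t_raw + a * d + b                   # assumed linear attenuation correction

thermal = np.random.uniform(25, 41, (480, 640))   # stand-in thermal frame (deg C)
depth = np.random.uniform(0.5, 2.0, (480, 640))   # stand-in depth frame (m)
print(head_temperature(thermal, depth, (100, 120, 160, 180)))
```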
{"title":"Chicken body temperature monitoring method in complex environment based on multi-source image fusion and deep learning","authors":"Pei Wang ,&nbsp;Pengxin Wu ,&nbsp;Chao Wang ,&nbsp;Xiaofeng Huang ,&nbsp;Lihong Wang ,&nbsp;Chengsong Li ,&nbsp;Qi Niu ,&nbsp;Hui Li","doi":"10.1016/j.compag.2024.109689","DOIUrl":"10.1016/j.compag.2024.109689","url":null,"abstract":"<div><div>Severe diseases in chickens present substantial risks to poultry husbandry industry. Notably, alterations in body temperature serve as critical clinical indicators of these diseases. Consequently, timely and accurate monitoring of body temperature is essential for the early detection of severe health issues in chickens. This study presents a novel method for simultaneous body temperature detection of multiple chickens in caged poultry environments. A dataset of 2896 chicken head images was developed. The YOLOv8n-mvc model was created to accurately detect chicken head positions and extracted temperature data and distance information through the fusion of RGB, thermal infrared, and depth images. The chicken head temperature was calibrated using distance information. The YOLOv8n-mvc model established in this study achieved a precision of 91.6 %, recall of 92.5 %, F1 score of 92.0 %, and [email protected] of 96.0 %. The model was successfully deployed on an edge computing device for validation tests, demonstrating its feasibility for chicken body temperature detection. This study provides a reference for developing a chicken health monitoring system based on body temperature.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"228 ","pages":"Article 109689"},"PeriodicalIF":7.7,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142746348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Location of safflower filaments picking points in complex environment based on improved Yolov5 algorithm
IF 7.7 · CAS Tier 1, Agricultural & Forestry Sciences · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2024-11-27 · DOI: 10.1016/j.compag.2024.109463
Xiaorong Wang, Jianping Zhou, Yan Xu, Chao Cui, Zihe Liu, Jinrong Chen
Mechanized safflower harvesting is prone to inaccurate recognition and positioning of safflower filaments, owing to complex environmental factors such as occlusion and lighting, as well as challenges related to small targets and small samples. To solve this problem, we improved the Yolov5 model and developed a two-stage recognition and positioning approach named Yolov5-ABBM. A safflower dataset was established to classify safflower filaments by maturity level. The Swin Transformer attention mechanism was incorporated to improve the feature-extraction capability of the model, particularly for small samples and small targets. A geometric operation algorithm based on Bbox and Mask (ABBM) was developed to increase positioning speed and minimize missed detections when locating filament picking points. Experimental results show that the improved model achieved recognition precision improvements of 5.8 % and 7.9 % based on Bbox and Mask, respectively, with significant gains of 15.3 % and 19.4 % for small samples. The positioning precision reached 98.19 %, with an average positioning time of 0.018 s per frame. The improved model demonstrated superior accuracy and positioning speed compared with other models. These results show that the improved model can accurately identify and locate safflower filament picking points, particularly for small samples, offering technical support for efficient mechanized safflower harvesting.
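A minimal sketch of a Bbox-and-Mask geometric step for locating a picking point; the midline rule below illustrates the idea rather than ABBM's exact geometry:

```python
# Sketch: combine a detection bbox with its segmentation mask and return a
# picking point at the lowest mask pixel on the bbox's vertical midline.
import numpy as np

def picking_point(mask, box):
    """mask: binary 2-D array; box: (x1, y1, x2, y2) from the detector."""
    x1, y1, x2, y2 = box
    cx = (x1 + x2) // 2                       # vertical midline of the bbox
    column = np.flatnonzero(mask[y1:y2, cx])  # mask pixels along that midline
    if column.size == 0:                      # fall back to the bbox bottom centre
        return cx, y2
    return cx, y1 + int(column.max())         # lowest filament pixel -> cut here

mask = np.zeros((200, 200), dtype=np.uint8)
mask[50:120, 90:110] = 1                      # toy filament blob
print(picking_point(mask, (85, 45, 115, 130)))  # -> (100, 119)
```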
{"title":"Location of safflower filaments picking points in complex environment based on improved Yolov5 algorithm","authors":"Xiaorong Wang ,&nbsp;Jianping Zhou ,&nbsp;Yan Xu ,&nbsp;Chao Cui ,&nbsp;Zihe Liu ,&nbsp;Jinrong Chen","doi":"10.1016/j.compag.2024.109463","DOIUrl":"10.1016/j.compag.2024.109463","url":null,"abstract":"<div><div>Mechanized safflower harvesting is prone to inaccurate recognition and positioning of safflower filaments, which is influenced by complex environmental factors such as occlusion, lighting, and challenges related to small targets and small samples. To solve this problem, we improved on the Yolov5 algorithm model and developed a two-stage recognition and positioning approach named Yolov5-ABBM. A safflower dataset was established to classify safflower filaments based on their maturity levels. The Swin Transformer attention mechanism was incorporated to improve the feature-extraction capability of the algorithm model, particularly for small samples and small targets. A geometric operation algorithm based on Bbox and Mask (ABBM) was developed to enhance the positioning speed and minimize missed recognition when locating safflower-filament picking points. Experimental results show that the improved model achieved a recognition precision improvement of 5.8% and 7.9% based on Bbox and Mask, respectively, and exhibited a significant enhancement of 15.3% and 19.4% for small samples. The positioning precision reached 98.19%, with an average positioning running time of 0.018 s per frame image. The improved model demonstrated superior accuracy and positioning speed compared with other algorithm models. The results show that the improved model could accurately identify and locate safflower-filament picking points, particularly for small samples, thereby offering technical support for efficient mechanized safflower harvesting.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"227 ","pages":"Article 109463"},"PeriodicalIF":7.7,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142721692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Transformer-Based hyperspectral image analysis for phenotyping drought tolerance in blueberries
IF 7.7 · CAS Tier 1, Agricultural & Forestry Sciences · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2024-11-27 · DOI: 10.1016/j.compag.2024.109684
Md. Hasibur Rahman, Savannah Busby, Sushan Ru, Sajid Hanif, Alvaro Sanz-Saez, Jingyi Zheng, Tanzeel U. Rehman
Drought-induced stress significantly impacts blueberry production because the plants’ inefficient water regulation mechanisms cannot maintain yield and fruit quality under drought stress. Traditional manual phenotyping for drought stress is both time-consuming and labor-intensive. To address the need for accurate, large-scale assessment of drought tolerance, we developed a high-throughput phenotyping (HTP) system to capture hyperspectral images of blueberry plants under drought conditions. A novel transformer-based model, LWC-former, was introduced to predict leaf water content (LWC) from the spectral reflectance of hyperspectral images obtained with the developed HTP system. LWC-former transforms the spectral reflectance into patch representations and embeds these patches into a lower-dimensional space to address multicollinearity. The patches are then passed to the transformer encoder to learn distributed features, followed by a regression head that predicts LWC. To train the model, spectral reflectance data were extracted from hyperspectral images and pre-processed using log(1/R), multiplicative scatter correction (MSC), and mean centering (MC). The results showed that our model achieved a coefficient of determination (R2) of 0.81 on the test dataset. The performance of the proposed model was also compared with TabTransformer, DeepRWC, multilayer perceptron (MLP), partial least squares regression (PLSR), support vector regression (SVR), and random forest (RF), which achieved R2 values of 0.65, 0.73, 0.71, 0.47, and 0.58, respectively. LWC-former thus outperformed the other deep learning and statistical models. The high-throughput phenotyping system effectively facilitated large-scale data collection, while the LWC-former model addressed multicollinearity, significantly improving LWC prediction. These results demonstrate the potential of our approach for large-scale drought tolerance assessment in blueberries.
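A minimal sketch of a transformer regressor over spectral patches, with illustrative dimensions and pooling rather than the published LWC-former configuration:

```python
# Sketch: split each reflectance vector into fixed-size band patches, embed
# them as tokens, encode with a transformer, and regress LWC from the pool.
import torch
import torch.nn as nn

class SpectralTransformer(nn.Module):
    def __init__(self, n_bands=200, patch=20, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        assert n_bands % patch == 0
        self.patch = patch
        self.embed = nn.Linear(patch, d_model)                 # band patch -> token
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)                      # regression head

    def forward(self, x):                                      # x: (B, n_bands)
        tokens = x.view(x.size(0), -1, self.patch)             # (B, n_patches, patch)
        z = self.encoder(self.embed(tokens))                   # distributed features
        return self.head(z.mean(dim=1)).squeeze(-1)            # pooled -> LWC

model = SpectralTransformer()
reflectance = torch.rand(8, 200)   # 8 pixels x 200 bands (after log(1/R), MSC, MC)
print(model(reflectance).shape)    # torch.Size([8])
```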
{"title":"Transformer-Based hyperspectral image analysis for phenotyping drought tolerance in blueberries","authors":"Md. Hasibur Rahman ,&nbsp;Savannah Busby ,&nbsp;Sushan Ru ,&nbsp;Sajid Hanif ,&nbsp;Alvaro Sanz-Saez ,&nbsp;Jingyi Zheng ,&nbsp;Tanzeel U. Rehman","doi":"10.1016/j.compag.2024.109684","DOIUrl":"10.1016/j.compag.2024.109684","url":null,"abstract":"<div><div>Drought-induced stress significantly impacted blueberry production due to the plants’ inefficient water regulation mechanisms to maintain yield and fruit quality under drought stress. Traditional methods of manual phenotyping for drought stress are not only time-consuming but also labor-intensive. To address the need for accurate and large-scale assessment of drought tolerance, we developed a high-throughput phenotyping (HTP) system to capture hyperspectral images of blueberry plants under drought conditions. A novel transformer-based model, LWC-former was introduced to predict leaf water content (LWC) utilizing spectral reflectance from hyperspectral images obtained from the developed HTP system. The LWC-former transformed the spectral reflectance into patch representations and embedded these patches into a lower dimensional to address multicollinearity issues. These patches were then passed to the transformer encoder to learn distributed features, followed by a regression head to predict LWC. To train the model, spectral reflectance data were extracted from hyperspectral images and pre-processed using log(1/R), mean scatter correction (MSC), and mean centering (MC). The results showed that our model achieved a coefficient of determination (R<sup>2</sup>) of 0.81 on the test dataset. The performance of the proposed model was also compared with TabTransformer, DeepRWC, multilayer perceptron (MLP), partial least squares regression (PLSR), support vector regression (SVR), and random forest (RF), achieving R<sup>2</sup> values of 0.65, 0.73, 0.71, 0.47, and 0.58, respectively. The results demonstrated that LWC-former outperformed other deep learning and statistical-based models. The high-throughput phenotyping system effectively facilitated large-scale data collection, while the LWC-former model addressed multicollinearity issues, significantly improving the prediction of LWC. These results demonstrate the potential of our approach for large-scale drought tolerance assessment in blueberries.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"228 ","pages":"Article 109684"},"PeriodicalIF":7.7,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142721378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An advanced high resolution land use/land cover dataset for Iran (ILULC-2022) by focusing on agricultural areas based on remote sensing data
IF 7.7 · CAS Tier 1, Agricultural & Forestry Sciences · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2024-11-27 · DOI: 10.1016/j.compag.2024.109677
Neamat Karimi, Sara Sheshangosht, Maryam Rashtbari, Omid Torabi, Amirhossein Sarbazvatan, Masoumeh Lari, Hossein Aminzadeh, Sina Abolhoseini, Mortaza Eftekhari
This study presents the first high-resolution land use/land cover dataset for Iran in 2022 (ILULC-2022), with a particular emphasis on agricultural areas. The research employed a two-level decision-tree object-oriented image analysis (OBIA-DT) model that combined segmentation of the study area derived from Google Earth images with classification based on multi-temporal information derived from Sentinel-2 satellite imagery. After segmentation of the fine-resolution images, the first level of the OBIA-DT model was established from the collected field datasets (about 52,000 field records) to build a light LULC map that broadly identified agricultural land components without differentiating between irrigated and non-irrigated cultivation. The second level used multi-temporal indices derived from Sentinel-2 imagery and supplementary data layers to produce a complete LULC map in which cropland was further distinguished into irrigated and rainfed lands, with four sub-classifications for irrigated lands. Using this approach, all basins of Iran were classified into sixteen distinct LULC classes, with agricultural land divided into two rainfed classes (rainfed farming and agroforestry) and five irrigated classes (orchards, fall crops, spring crops, multiple crops, and fallow crops). According to the collected field data, the overall accuracy of the ILULC-2022 maps ranged from 85 % to 97 % across basins with climates ranging from cold and temperate to hot and dry. Results reveal that the major irrigated crop classes had user’s and producer’s accuracies ranging from 91 % to 96 %. Based on the findings of this study, the total agricultural area of Iran encompasses 20.9 ± 2.1 million ha, constituting approximately 13 % of the country’s total land area. Within this agricultural expanse, irrigated lands (comprising irrigated croplands and orchards) and rainfed agricultural lands cover 10.2 ± 1.08 and 10.7 ± 1.02 million ha, respectively, with most agricultural areas located in basins with moderate to humid climates. The ILULC-2022 dataset serves as a benchmark for future LULC change detection and is a valuable reference for efforts aimed at achieving sustainable development goals in Iran.
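A minimal sketch of the second-level classification idea: a decision tree over multi-temporal indices per segment, with hypothetical feature and label files standing in for the OBIA-DT workflow:

```python
# Sketch: classify image segments into LULC classes from multi-temporal
# Sentinel-2 index statistics with a decision tree, then report per-class
# precision/recall (the analogues of user's/producer's accuracy).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# rows = image segments; columns = per-month NDVI/NDWI statistics, etc.
X = np.load("segment_features.npy")   # hypothetical feature table
y = np.load("segment_labels.npy")     # field-verified LULC classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = DecisionTreeClassifier(max_depth=12).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```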
{"title":"An advanced high resolution land use/land cover dataset for Iran (ILULC-2022) by focusing on agricultural areas based on remote sensing data","authors":"Neamat Karimi,&nbsp;Sara Sheshangosht,&nbsp;Maryam Rashtbari,&nbsp;Omid Torabi,&nbsp;Amirhossein Sarbazvatan,&nbsp;Masoumeh Lari,&nbsp;Hossein Aminzadeh,&nbsp;Sina Abolhoseini,&nbsp;Mortaza Eftekhari","doi":"10.1016/j.compag.2024.109677","DOIUrl":"10.1016/j.compag.2024.109677","url":null,"abstract":"<div><div>This study presents the first high-resolution Land Use/Land Cover dataset for Iran in 2022 (ILULC-2022), with a particular emphasis on the agricultural areas. This research employed a two-level Decision Tree Object-Oriented Image Analysis (OBIA-DT) model which incorporated segmentation of the study area derived from Google Earth images, and classification using multi-temporal information derived from Sentinel-2 satellite imagery. After segmentation of fine resolution images, the first level of the OBIA-DT model established based on the collected field datasets (about 52,000 field data were collected) to build a light LULC map which broadly identified agricultural land components without differentiating between irrigated and non-irrigated cultivations. The second level used multi-temporal indices derived from Sentinel-2 imagery and supplementary data layers to produce a complete LULC map wherein cropland areas was distinguished further into irrigated and rainfed lands, with four distinctive sub-classifications for irrigated lands. By employing this approach, a LULC map of all basins of Iran were classified into sixteen distinct classes, with different agricultural lands divided into two rainfed croplands (rainfed farming and agroforestry) and five irrigated lands (orchards, fall crops, spring crops, multiple crops, and fallow crops). According to the collected field data, the overall accuracy of ILULC-2022 maps exhibited a range from 85 to 97 % for basins with varying climates ranging from cold and temperate to hot and dry, respectively. Results reveal that the major irrigated crop classes had a user’s accuracy and producer’s accuracy ranging from 91 % to 96 %. Based on the findings of this study, the total area of agricultures in Iran encompasses 20.9 ± 2.1 million ha, constituting approximately 13 % of the Iran’s total land area. Within this agricultural expanse, irrigated (comprising irrigated lands and orchards) and rainfed agricultural lands are delineated as 10.2 ± 1.08 and 10.7 × ± 1.02 million ha, respectively, with most agricultural areas located in basins with moderate to humid climates. The ILULC-2022 dataset serves as a benchmark for future LULC change detection and is a valuable reference for efforts aimed at achieving sustainable development goals in Iran.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"228 ","pages":"Article 109677"},"PeriodicalIF":7.7,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142721627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Modelling methane production of dairy cows: A hierarchical Bayesian stochastic approach
IF 7.7 · CAS Tier 1, Agricultural & Forestry Sciences · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2024-11-26 · DOI: 10.1016/j.compag.2024.109683
Cécile M. Levrault, Nico W.M. Ogink, Jan Dijkstra, Peter W.G. Groot Koerkamp, Kelly Nichols, Fred A. van Eeuwijk, Carel F.W. Peeters
Monitoring methane production from individual cows is required for evaluating the success of greenhouse gas reduction strategies. However, converting non-continuous measurements of methane production into daily methane production rates (MPR) remains challenging because of the general non-linearity of the methane production curve. In this paper, we propose a Bayesian hierarchical stochastic kinetic equation approach to address this challenge, enabling information to be shared across cows for improved modelling. We fit a non-linear curve to climate respiration chamber (CRC) data of 28 dairy cows and then compute the area under the curve, thereby estimating MPR for individual cows; the monitored and predicted population means were 416.7 ± 36.2 g/d and 407.2 ± 35.0 g/d, respectively. The shape parameters of this model were pooled across cows (population level), while the scale parameter varied between individuals, allowing the characterization of variation in MPR within and between cows. Model fit was thoroughly investigated through posterior predictive checking, which showed that the model reproduces the CRC data accurately. Comparison with a fully pooled model (all parameters constant across cows) via cross-validation showed that the Hierarchical Methane Rate (HMR) model performed better (difference in expected log predictive density of 1653). Concordance between the values observed in the CRC and those predicted by HMR was assessed with R2 (0.995), root mean square error (10.0 g/d), and Lin’s concordance correlation coefficient (0.961). Overall, the HMR predictions reflected individual MPR levels and between-cow variation as well as the standard analytical approach applied to CRC data.
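A minimal sketch of the deterministic core only: fit a nonlinear curve to non-continuous chamber measurements and integrate it for a daily rate. The kinetic form below is an assumed illustration, and the paper's hierarchical Bayesian layer (pooled shape parameters, cow-specific scale) is omitted:

```python
# Sketch: fit an assumed diurnal methane curve to sparse chamber samples,
# then integrate the area under the curve to obtain a daily MPR (g/d).
import numpy as np
from scipy.optimize import curve_fit

def methane_rate(t, scale, amp, phase):
    """Assumed form: cow-specific scale with a sinusoidal diurnal term."""
    return scale * (1.0 + amp * np.sin(2 * np.pi * (t - phase) / 24.0))

t_obs = np.array([2, 6, 10, 14, 18, 22], dtype=float)    # sampling hours
y_obs = np.array([15, 18, 20, 19, 16, 14], dtype=float)  # g CH4/h (illustrative)

params, _ = curve_fit(methane_rate, t_obs, y_obs, p0=[17, 0.2, 4])
t = np.linspace(0, 24, 241)
daily_mpr = np.trapz(methane_rate(t, *params), t)        # area under the curve
print(round(daily_mpr, 1))                               # ~400 g CH4/day
```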
{"title":"Modelling methane production of dairy cows: A hierarchical Bayesian stochastic approach","authors":"Cécile M. Levrault ,&nbsp;Nico W.M. Ogink ,&nbsp;Jan Dijkstra ,&nbsp;Peter W.G. Groot Koerkamp ,&nbsp;Kelly Nichols ,&nbsp;Fred A. van Eeuwijk ,&nbsp;Carel F.W. Peeters","doi":"10.1016/j.compag.2024.109683","DOIUrl":"10.1016/j.compag.2024.109683","url":null,"abstract":"<div><div>Monitoring methane production from individual cows is required for evaluating the success of greenhouse gas reduction strategies. However, converting non-continuous measurements of methane production into daily methane production rates (MPR) remains challenging due to the general non-linearity of the methane production curve. In this paper, we propose a Bayesian hierarchical stochastic kinetic equation approach to address this challenge, enabling the sharing of information across cows for improved modelling. We fit a non-linear curve on climate respiration chamber (CRC) data of 28 dairy cows before computing an area under the curve, thereby providing an estimate of MPR from individual cows, yielding a monitored and predicted population mean of 416.7 ± 36.2 g/d and 407.2 ± 35.0 g/d respectively. The shape parameters of this model were pooled across cows (population-level), while the scale parameter varied between individuals. This allowed for the characterization of variation in MPR within and between cows. Model fit was thoroughly investigated through posterior predictive checking, which showed that the model could reproduce this CRC data accurately. Comparison with a fully pooled model (all parameters constant across cows) was evaluated through cross-validation, where the Hierarchical Methane Rate (HMR) model performed better (difference in expected log predictive density of 1653). Concordance between the values observed in the CRC and those predicted by HMR was assessed with R<sup>2</sup> (0.995), root mean square error (10.0 g/d), and Lin’s concordance correlation coefficient (0.961). Overall, the predictions made by the HMR model appeared to reflect individual MPR levels and variation between cows as well as the standard analytical approach taken by scientists with CRC data.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"228 ","pages":"Article 109683"},"PeriodicalIF":7.7,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142721631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Development of individual models for predicting cow milk production for real-time monitoring
IF 7.7 · CAS Tier 1, Agricultural & Forestry Sciences · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2024-11-26 · DOI: 10.1016/j.compag.2024.109698
Jae-Woo Song, Mingyung Lee, Hyunjin Cho, Dae-Hyun Lee, Seongwon Seo, Wang-Hee Lee
Daily milk yield serves as a physiological indicator in dairy cows and is a primary target for prediction and real-time monitoring in smart livestock farming. This study developed an individual model for predicting daily milk yield and applied it to monitor the health status of dairy cows through a real-time monitoring algorithm. After data preprocessing and screening, 580 datasets were used to develop the model by modifying existing models based on nonlinear regression analysis. The developed model was then applied to short-term, real-time monitoring of abnormal daily milk yields. The optimal model predicted daily milk yield with an R2 of 0.875 and a root mean squared error of 2.192. Real-time monitoring was designed to detect abnormal daily milk yields by jointly considering a 90 % confidence interval and the difference between predicted values and expected trends. This study is the first to design a monitoring algorithm for daily milk yield from dairy cows based on an individual model capable of predicting the daily milk yield. Such a platform will be necessary for highly efficient smart livestock farming, enabling high productivity with minimal inputs.
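A minimal sketch of the monitoring rule, assuming Wood's lactation curve as the individual model (the paper modifies existing models) and a residual-based 90 % band:

```python
# Sketch: fit an individual lactation curve, build a 90 % confidence band from
# residuals, and flag new daily yields that fall outside it.
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    return a * t**b * np.exp(-c * t)   # Wood's lactation curve (assumed model)

rng = np.random.default_rng(0)
days = np.arange(5, 200, 5, dtype=float)                             # days in milk
milk = wood(days, 20, 0.25, 0.004) + rng.normal(0, 1.5, days.size)   # simulated records

params, _ = curve_fit(wood, days, milk, p0=[15, 0.2, 0.003])
band = 1.645 * np.std(milk - wood(days, *params))   # ~90 % interval, normal residuals

def is_abnormal(t_new, y_new):
    """Flag a new daily yield outside the individual 90 % confidence band."""
    return abs(y_new - wood(t_new, *params)) > band

print(is_abnormal(120.0, 8.0))   # a yield far below expectation -> True
```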
{"title":"Development of individual models for predicting cow milk production for real-time monitoring","authors":"Jae-Woo Song ,&nbsp;Mingyung Lee ,&nbsp;Hyunjin Cho ,&nbsp;Dae-Hyun Lee ,&nbsp;Seongwon Seo ,&nbsp;Wang-Hee Lee","doi":"10.1016/j.compag.2024.109698","DOIUrl":"10.1016/j.compag.2024.109698","url":null,"abstract":"<div><div>Daily milk yield serves as a physiological indicator in dairy cows and is a primary target for prediction and real-time monitoring in smart livestock farming. This study attempted to develop an individual model for predicting daily milk yield and applied it to monitor the health status of dairy cows by designing a real-time monitoring algorithm. A total of 580 datasets were used for model development after data preprocessing and screening, which were subsequently used to develop the model by modifying the existing models based on nonlinear regression analysis. The developed model was then applied to short-term real-time monitoring of abnormal daily milk yields. The optimal model was able to predict the daily milk yield, with an R<sup>2</sup> value of 0.875 and a root mean squared error of 2.192. Real-time monitoring was designed to detect abnormal daily milk yields by collectively considering a 90% confidence interval and the difference between predicted values and expected trends. This study is the first to design a monitoring algorithm for daily milk yield from dairy cows based on an individual model capable of predicting the daily milk yield. This study expects that a platform will be necessary for highly efficient smart livestock farming, enabling high productivity with minimal inputs.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"228 ","pages":"Article 109698"},"PeriodicalIF":7.7,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142721377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0