
Latest articles in Computers and Electronics in Agriculture

Utilizing farm knowledge for indoor precision livestock farming: Time-domain adaptation of cattle face recognition
IF 7.7 | CAS Tier 1 (Agricultural and Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2025-03-19 | DOI: 10.1016/j.compag.2025.110301
Shujie Han, Alvaro Fuentes, Jongbin Park, Sook Yoon, Jucheng Yang, Yongchae Jeong, Dong Sun Park
In real-field cattle farming environments, precise cattle recognition is imperative for effective animal husbandry practices such as monitoring individual behaviors and screening health to ensure animal welfare. Data-driven deep learning models have recently provided efficient and non-intrusive face recognition. However, their application in real-world scenarios presents significant challenges due to data domain drift over time, encompassing geometric variations in face pose, illumination fluctuations, and disruptions in the background environment. To tackle these challenges, this paper introduces a framework for cattle face recognition with innovative techniques based on farm knowledge that guide the model’s training and inference process. First, we combine temporal and pose alignment to mitigate the impact of geometric pose variations. Second, we employ illumination augmentation to adapt to varying illumination conditions, bolstering model robustness. Third, we use semantic segmentation to isolate the facial components, enhancing recognition precision and maintaining focus on facial attributes. Empirical experiments validate our approach, demonstrating its effectiveness for real-world deployment and ensuring robust performance across changing environmental conditions. Our model maintains high accuracy, underscoring its reliability in managing the complexity of real-world scenarios. In summary, this paper presents a comprehensive strategy to address domain drift challenges in cattle face recognition within extended real-world settings, equipping the model to meet the demands of genuine cattle farming contexts effectively.
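The illumination-augmentation step described above can be sketched with standard image transforms. A minimal illustration (not the authors' code) using torchvision; the jitter strengths and the random placeholder image are assumptions:

```python
# Illustrative only: illumination augmentation of the kind described above,
# built from standard torchvision transforms. Jitter strengths are assumptions,
# and the random array stands in for an aligned cattle-face crop.
import numpy as np
from PIL import Image
import torchvision.transforms as T

face_crop = Image.fromarray(
    np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)  # placeholder image
)

illumination_aug = T.Compose([
    T.ColorJitter(brightness=0.4, contrast=0.3, saturation=0.2),  # simulate lighting drift
    T.RandomAutocontrast(p=0.3),                                  # occasional contrast change
    T.ToTensor(),
])

augmented = illumination_aug(face_crop)  # tensor ready for the recognition backbone
print(augmented.shape)                   # torch.Size([3, 224, 224])
```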
Citations: 0
A novel self-supervised method for in-field occluded apple ripeness determination
IF 7.7 | CAS Tier 1 (Agricultural and Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2025-03-19 | DOI: 10.1016/j.compag.2025.110246
Ziang Zhao, Yulia Hicks, Xianfang Sun, Benjamin J. McGuinness, Hin S. Lim
The full view of the apples in the orchard is often obscured by leaves and trunks, making it challenging to determine their ripeness accurately, which is an essential yet difficult task for apple-harvesting robots. Within this context, we propose a novel method to address two critical challenges: ripeness determination and in-field occlusion. The proposed method is trained in a self-supervised manner on a dataset in which fewer than 1% of the images are labelled and the remainder are unlabelled. It is made up of three key parts: a reconstructor, a feature extractor, and a predictor. The reconstructor is designed to reconstruct the missing parts of occluded apples. The feature extractor is introduced to learn ripeness-related features from the vast number of unlabelled images. Unlike previous approaches that classify fruit ripeness into several discrete categories, the predictor uses the learned features to generate a continuous ripeness score between 0.0 and 1.0, thus eliminating the need to subjectively pre-define ripeness stages and offering end-users the flexibility to make their own decisions.
Experimental results comparing our method to another method with different settings show that our method achieves the best Structural Similarity Index Measure (SSIM) of 0.75 and the second-best Peak Signal-to-Noise Ratio (PSNR) of 25.36 for reconstructing missing apple parts, whilst using the fewest parameters (86.3M). Besides, our method outperforms 15 other self-supervised methods and even a supervised method in ripeness score prediction, with the lowest score of 0.0127 for fully unripe apples and the highest score of 0.8933 for fully ripe apples. The results demonstrate the potential of our method to be incorporated into in-field robotic systems, enabling them to assess ripeness for selective harvesting effectively. It can also help monitor the overall ripeness of large orchards digitally, aid decision-making processes, and advance the goals of smart and precision agriculture.
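SSIM and PSNR, the reconstruction metrics reported above, can be computed with scikit-image. A minimal sketch on synthetic arrays, not the paper's evaluation code:

```python
# Illustrative only: SSIM and PSNR computed with scikit-image on synthetic arrays,
# mirroring the reconstruction metrics reported above (not the paper's code).
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)
original = rng.random((128, 128, 3))                       # stand-in for an unoccluded apple
reconstructed = np.clip(original + rng.normal(0, 0.05, original.shape), 0.0, 1.0)

ssim = structural_similarity(original, reconstructed, channel_axis=-1, data_range=1.0)
psnr = peak_signal_noise_ratio(original, reconstructed, data_range=1.0)
print(f"SSIM = {ssim:.3f}, PSNR = {psnr:.2f} dB")
```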
Citations: 0
A hybrid chromaticity-morphological machine learning model to overcome the limit of detecting Newcastle disease in experimentally infected chicken within 36 h
IF 7.7 | CAS Tier 1 (Agricultural and Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2025-03-19 | DOI: 10.1016/j.compag.2025.110248
Mohd Anif A.A. Bakar, Pin Jern Ker, Shirley G.H. Tang, Fatin Nursyaza Arman Shah, T.M. Indra Mahlia, Mohd Zafri Baharuddin, Abdul Rahman Omar
The nexus between animal and human health is crucial in upholding global health. Food security is at risk because fatal infections associated with the Newcastle disease virus (NDV) can result in severe disease outbreaks. This work reports on the early detection of experimentally NDV-infected chickens to prevent such catastrophic events. Image processing techniques were employed to extract the chromaticity and morphological features of the chicken comb and standing posture. The changes in these features across different stages of symptom severity, indicated by the post-infection period in hours, were examined through statistical and Spearman correlation coefficient analysis. Various hybrid chromaticity-morphology machine learning (HCMML) classifier models, including Logistic Regression, Support Vector Machine (SVM) with different kernels, K-Nearest Neighbour (KNN), Decision Tree, and Artificial Neural Network (ANN), were trained using selected feature variables and different variations of the dataset to detect infected chickens. The statistical analysis of individual features demonstrates the necessity of HCMML models for predicting infected chickens with reasonably high accuracy. Based on the correlation analysis, the chromaticity features show a stronger correlation with NDV infection than the morphological features. These findings highlight the importance of extracting chromaticity features for predicting infected chickens, especially in the early phase of infection. Among the HCMML models, SVM with a polynomial kernel achieved a test accuracy of 82.39 % with 79.00 % validation accuracy at 36 h post-infection after feature optimization, and above 95.00 % test accuracy after 96 h post-infection. This work demonstrates a promising methodology for developing machine learning algorithms using hybrid chromaticity-morphological features for early detection of virus-infected chickens, contributing to the goal of a sustainable and healthier planet.
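The best-performing classifier reported above, an SVM with a polynomial kernel, can be sketched with scikit-learn. The feature vectors and labels below are synthetic placeholders, not the study's chromaticity-morphology data:

```python
# Illustrative only: an SVM with a polynomial kernel on synthetic feature vectors,
# standing in for the hybrid chromaticity-morphology features used by the HCMML models.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.random((300, 8))                        # e.g. comb chromaticity + posture features
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)       # 1 = infected, 0 = healthy (synthetic rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3, C=1.0))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```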
Citations: 0
Autonomous navigation system in various greenhouse scenarios based on improved FAST-LIO2
IF 7.7 | CAS Tier 1 (Agricultural and Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2025-03-19 | DOI: 10.1016/j.compag.2025.110279
Zhenyu Huang, Ningyuan Yang, Runzhou Cao, Zhongren Li, Yong He, Xuping Feng
The development of phenotypic detection robots suitable for semi-structured greenhouses is of significant importance for accelerating crop breeding, particularly in the screening of advantageous germplasm resources. However, the diversity of greenhouse structures and the limitations of GPS signals pose challenges to the autonomous navigation of robots. In this study, a system with autonomous navigation, voice interaction, and adaptive data acquisition was developed for strawberry germplasm resources. To reduce the drift of the global map on the z-axis and improve consistency, ground constraints and stable triangle descriptor (STD) loop closure detection were incorporated into the fast direct light detection and ranging inertial odometry (FAST-LIO2) framework. In addition, the improved FAST-LIO2 and Kalman filter were utilized to provide poses, achieving precise and continuous localization of the robot. To improve flexibility, the demonstrated path was utilized as the global path. Besides, the system integrated adaptive data acquisition and voice control modules, enabling the automatic collection of target plant data and variety information while enhancing human–computer interaction performance. The system achieved high-precision navigation across different scenarios, speeds, and motion states. Even in the state of lowest accuracy during row change, the standard deviation (SD) of the total deviation remained below 2.6 cm, the root mean square error (RMSE) was less than 5.9 cm, and the average deviation (AD) was below 5.3 cm. In terms of heading deviation, the SD was below 1.8°, the RMSE was less than 3.8°, and the AD was below 3.4°. Moreover, the success rate of target plant detection reached over 98 %. This system facilitated the construction of phenotypic analysis models, assisting breeders in variety management and demonstrating application potential in greenhouse phenotypic detection.
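The tracking statistics quoted above (SD, RMSE, and AD of the deviation) follow directly from a sequence of cross-track errors. A minimal sketch with made-up values:

```python
# Illustrative only: the deviation statistics quoted above (SD, RMSE, AD) computed
# from a short, made-up sequence of cross-track errors in metres.
import numpy as np

deviation = np.array([0.021, -0.034, 0.018, 0.047, -0.012, 0.030])  # synthetic errors (m)

sd = deviation.std(ddof=1)               # standard deviation
rmse = np.sqrt(np.mean(deviation**2))    # root mean square error
ad = np.mean(np.abs(deviation))          # average absolute deviation
print(f"SD = {sd*100:.1f} cm, RMSE = {rmse*100:.1f} cm, AD = {ad*100:.1f} cm")
```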
Citations: 0
Deep convolutional networks based on lightweight YOLOv8 to detect and estimate peanut losses from images in post-harvesting environments
IF 7.7 | CAS Tier 1 (Agricultural and Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2025-03-19 | DOI: 10.1016/j.compag.2025.110282
Armando Lopes de Brito Filho, Franciele Morlin Carneiro, Vinicius dos Santos Carreira, Danilo Tedesco, Jarlyson Brunno Costa Souza, Marcelo Rodrigues Barbosa Júnior, Rouverson Pereira da Silva
Peanut loss detection is key to monitoring operational quality during mechanical harvesting. Current manual assessments face practical limitations in the field, as they tend to be exhausting, time-consuming, and susceptible to errors, especially after long work periods. Therefore, the main objective of this study was to develop an automated image processing framework to detect, count, and estimate peanut pod losses during the harvesting operation. We proposed a robust approach encompassing different environmental conditions and training detection algorithms, specifically based on a lightweight YOLOv8 architecture, with images acquired with a mobile smartphone at six different times of the day (10 a.m., 11 a.m., 1 p.m., 2 p.m., 3 p.m., and 4 p.m.). The experimental results showed that detecting two-seed peanut pods was more effective than detecting one-seed pods, with higher precision, recall, and mAP50 values. The best results for image acquisition were obtained between 10 a.m. and 2 p.m. The study also compared manual and automated counting methods, revealing that the best counting scenarios achieved an R² above 0.80. Furthermore, georeferenced maps of peanut losses revealed significant spatial variability, providing critical insights for targeted interventions. These findings demonstrate the potential to enhance mechanized harvesting efficiency and lay the groundwork for future integration into fully automated systems. By incorporating this method into harvesting machinery, real-time monitoring and accurate loss quantification can be achieved, substantially reducing the need for labor-intensive manual assessments.
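Counting detected pods with a YOLOv8 model can be sketched through the ultralytics API. The checkpoint name, image path, and class labels below are hypothetical stand-ins, not the authors' artifacts:

```python
# Illustrative only: counting detected pods per class with a YOLOv8 model through the
# ultralytics API. "peanut_pods.pt" and "field_plot.jpg" are hypothetical stand-ins,
# and the class names depend entirely on how such a model was trained.
from collections import Counter
from ultralytics import YOLO

model = YOLO("peanut_pods.pt")                # hypothetical fine-tuned checkpoint
results = model("field_plot.jpg", conf=0.25)  # hypothetical smartphone image

names = results[0].names                      # class index -> label mapping
counts = Counter(names[int(c)] for c in results[0].boxes.cls)
print(counts)                                 # e.g. Counter({'two_seed': 42, 'one_seed': 17})
```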
Citations: 0
Non-destructive and efficient prediction of intramuscular fat in live pigs based on ultrasound images and machine learning
IF 7.7 | CAS Tier 1 (Agricultural and Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2025-03-19 | DOI: 10.1016/j.compag.2025.110291
Jian Wu, Yingying Yang, Wen Yang, Chengwan Zha, Liming Hou, Sanqin Zhao, Wangjun Wu, Yutao Liu
Intramuscular fat (IMF) content plays an essential role in the evaluation of meat quality. To select pig breeds with different IMF content, developing a method to predict IMF content in live pigs is of great significance for reducing the cost and time of breeding. In the current study, real-time ultrasound images 5 cm off-midline across the third and fourth last thoracic ribs of 336 live pigs were collected using the B-mode technique, and image feature parameters were extracted by computer image processing techniques. Furthermore, multiple linear regression (MLR) and two machine learning algorithms, support vector machine (SVM) and back-propagation artificial neural network (BPANN), were used to develop prediction models of IMF content. The experimental pigs were divided into a training dataset (n = 266) for developing the prediction models, a validation dataset (n = 70) for estimating the accuracy of the models, and a test set (n = 67) for additional model performance evaluation. The results reveal that the coefficient of determination (R²) of the models ranges from 0.65 to 0.80 with a root-mean-square error (RMSE) range of 0.50 %–0.65 % in the training dataset. By contrast, the correlation coefficients (R) between the predicted IMF (PIMF) and the chemically measured IMF (CIMF) range from 0.72 to 0.82 with an RMSE ≤ 0.69 % for all the models in the validation and test datasets. Moreover, the results indicate that the ratio of individuals with an absolute difference (ADIF) between PIMF and CIMF within 1 % is above 86.57 % for all the models. In addition, classification accuracy shows that the BPANN1 model has superior classification ability in both the low and high IMF content groups compared to the other two types of models in the validation dataset, but not in the test dataset. The MLR models are superior to the other models in the medium IMF content group. Overall, our research demonstrates that it is feasible to predict IMF content based on ultrasound images of live pigs and provides several alternative models for accurate determination of IMF content, which could accelerate the genetic improvement of IMF content, thereby improving pork quality in pig breeding programs.
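The regression setup described above (image-derived features in, IMF percentage out) can be sketched with scikit-learn. The multiple linear regression and SVM variants below mirror two of the compared model types, on synthetic placeholder data:

```python
# Illustrative only: regressing IMF content on image-derived features with multiple
# linear regression and an SVM regressor, mirroring two of the compared model types.
# Features and IMF values are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(2)
X = rng.random((336, 10))                           # texture / grey-level features per image
y = 1.5 + 2.0 * X[:, 0] + rng.normal(0, 0.3, 336)   # synthetic IMF content (%)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
for name, model in [("MLR", LinearRegression()),
                    ("SVM", make_pipeline(StandardScaler(), SVR(kernel="rbf")))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    print(f"{name}: R2 = {r2_score(y_te, pred):.2f}, RMSE = {rmse:.2f} %")
```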
Citations: 0
Improving the estimation accuracy of alfalfa quality based on UAV hyperspectral imagery by using data enhancement and synergistic band selection strategies
IF 7.7 | CAS Tier 1 (Agricultural and Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2025-03-18 | DOI: 10.1016/j.compag.2025.110305
Shuai Fu, Jie Liu, Jinlong Gao, Qisheng Feng, Senyao Feng, Chunli Miao, Yunhao Li, Caixia Wu, Tiangang Liang
Accurate and timely assessment of alfalfa nutritional parameters is crucial for optimizing harvest management, maximizing yield, and ensuring high-quality forage in China’s Hexi Corridor, a key alfalfa-growing region. UAV-based hyperspectral remote sensing offers a nondestructive and efficient method for monitoring these parameters, providing high-resolution data and covering large areas efficiently. Previous studies have faced challenges related to the scarcity and imbalance of hyperspectral samples and the effective selection of spectral bands for evaluating crop nutrients. Additionally, the simultaneous evaluation of multiple nutrient parameters using a common set of spectral bands has rarely been reported. Least Absolute Shrinkage and Selection Operator (LASSO) is an important method for hyperspectral band selection, but its linear fitting process is challenged by the complex relationship between spectral reflectance and plant properties. In this study, we propose a new band selection strategy that identifies the most informative spectral bands and improves model performance by combining the strengths of both LASSO selection of bands and machine learning’s ability to fit complex relationships. To address the issue of imbalanced field samples, we generated high-quality synthetic data using the synthetic minority oversampling technique for regression with Gaussian noise (SMOGN) algorithm. Three machine learning models (ANN, RF, and SVM) were then employed to predict alfalfa nutritional parameters. Our findings show that the proposed synergistic band selection strategy significantly improves model performance, yielding a 14–25 % reduction in RMSE while requiring only 37–59 % of the original spectral bands. By integrating this band selection strategy with the SMOGN method, our optimal model for estimating alfalfa nutrient parameters achieved R² values of 0.92–0.95 and PRMSE values of 5.1–7.1 %. We observed the importance of the spectral regions around 730 nm and 960 nm for predicting alfalfa quality parameters. This finding suggests that existing satellite platforms such as Sentinel-2 and Landsat could improve the accuracy and efficiency of alfalfa quality monitoring by incorporating these specific spectral bands. Overall, our approach provides a robust and transferable framework for improving the accuracy and reliability of remote sensing-based crop quality monitoring, which is important for optimizing the spectral band configurations of future satellite sensors for precision agriculture.
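The LASSO step of the band-selection strategy can be sketched with scikit-learn: bands with non-zero coefficients form the candidate subset passed to the downstream models. The data below are synthetic, and the sketch omits the machine-learning refinement and SMOGN augmentation described in the paper:

```python
# Illustrative only: LASSO-based band screening on synthetic spectra; the non-zero
# coefficients index the retained bands that downstream models would use. The SMOGN
# augmentation and machine-learning refinement described above are omitted here.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_samples, n_bands = 120, 200
reflectance = rng.random((n_samples, n_bands))          # placeholder canopy spectra
crude_protein = (10 + 5 * reflectance[:, 50] - 4 * reflectance[:, 130]
                 + rng.normal(0, 0.5, n_samples))       # synthetic quality trait

X = StandardScaler().fit_transform(reflectance)
lasso = LassoCV(cv=5, random_state=0).fit(X, crude_protein)
selected_bands = np.flatnonzero(lasso.coef_)            # candidate band subset
print(f"{selected_bands.size} of {n_bands} bands retained:", selected_bands[:10], "...")
```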
Citations: 0
Comparing different statistical models for predicting greenhouse gas emissions, energy-, and nitrogen intensity
IF 7.7 | CAS Tier 1 (Agricultural and Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2025-03-18 | DOI: 10.1016/j.compag.2025.110209
Kristian Nikolai Jæger Hansen, Håvard Steinshamn, Sissel Hansen, Matthias Koesling, Tommy Dalgaard, Bjørn Gunnar Hansen
To evaluate the environmental impact across multiple dairy farms cost-effectively, the methodological framework for environmental assessments may be redefined. This article aims to assess the ability of various statistical tools to predict impact assessments made with a Life Cycle Assessment (LCA). The different models predicted estimates of Greenhouse Gas (GHG) emissions, Energy (E), and Nitrogen (N) intensity. The functional unit (FU) in the study was defined as 2.78 MJ of human-edible energy from milk and meat (MJMM). This amount is equivalent to the edible energy in one kg of energy-corrected milk but includes energy from milk and meat. The GHG emissions (GWP100) were calculated as kg CO2-eq per FU delivered, E intensity as fossil and renewable energy used divided by the number of FU delivered, and N intensity as kg N imported and produced divided by kg N delivered in milk or meat (kg N/kg N). These predictions were based on 24 independent variables describing farm characteristics, management, use of external inputs, and dairy herd characteristics.
All models were able to estimate the results of the LCA calculations moderately well, although their precision was low. The Artificial Neural Network (ANN) was best for predicting GHG emissions on the test dataset (RMSE = 0.50, R² = 0.86), followed by Multiple Linear Regression (MLR) (RMSE = 0.68, R² = 0.74). For E intensity, the Support Vector Machine (SVM) model performed best (RMSE = 0.68, R² = 0.73), followed by the ANN (RMSE = 0.55, R² = 0.71) and the Gradient Boosting Machine (GBM) (RMSE = 0.55, R² = 0.71). For N intensity, Multiple Linear Regression (MLR) (RMSE = 0.36, R² = 0.89) and Lasso regression (RMSE = 0.36, R² = 0.88) performed best, followed by the ANN (RMSE = 0.41, R² = 0.86). In this study, machine learning provided some benefit over simpler models such as Multiple Linear Regression with backward selection when predicting GHG emissions; this benefit was limited for N and E intensity. Across the different models, the precision of the GHG-emission predictions improved most when the variables “fertiliser import nitrogen” (kg N/ha) and “proportion of milking cows” (number of dairy cows/number of all cattle) were included. The inclusion of “fertiliser import nitrogen” was also important across the different models for predicting E and N intensity.
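A cross-validated comparison of the model families discussed above (MLR, SVM, GBM, ANN) can be sketched with scikit-learn. The 24 predictors and the target are random placeholders, not the study's farm data:

```python
# Illustrative only: cross-validated RMSE for the model families discussed above
# (MLR, SVM, GBM, a small ANN) on random placeholder farm descriptors; nothing here
# reproduces the study's LCA data or tuning.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.random((200, 24))                                            # 24 farm descriptors
y = 1.2 + 0.8 * X[:, 0] + 0.5 * X[:, 5] + rng.normal(0, 0.2, 200)    # e.g. kg CO2-eq per FU

models = {
    "MLR": LinearRegression(),
    "SVM": make_pipeline(StandardScaler(), SVR()),
    "GBM": GradientBoostingRegressor(random_state=0),
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
}
for name, model in models.items():
    rmse = -cross_val_score(model, X, y, cv=5, scoring="neg_root_mean_squared_error").mean()
    print(f"{name}: CV RMSE = {rmse:.3f}")
```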
Citations: 0
A novel architecture for automated delineation of the agricultural fields using partial training data in remote sensing images
IF 7.7 | CAS Tier 1 (Agricultural and Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2025-03-18 | DOI: 10.1016/j.compag.2025.110265
Sumesh KC, Jagannath Aryal, Dongryeol Ryu
Digital agricultural services (DAS) rely on timely and accurate spatial information on agricultural fields. Initiatives, including deep learning (DL), have been used to extract accurate spatial information from remote sensing images. However, DL approaches require a large amount of fully segmented and labelled field boundary data for training that is not readily available. Obtaining high-quality training data is often costly and time-consuming. To address this challenge, we develop a multi-scale, multi-task DL-based novel architecture with two modules, an edge enhancement block (EEB) and a spatial attention block (SAB), using partial training data (i.e., weak supervision). This architecture is capable of delineating narrow and weak boundaries of agricultural fields. The model simultaneously learns three tasks: boundary prediction, extent prediction, and distance estimation, which enhances the generalisability of the network. The EEB module extracts semantic edge features at multiple levels. The SAB module integrates the features from the encoder and decoder to improve the geometric accuracy of field boundary delineation. We conduct an experiment in Ille-et-Vilaine, France, using time-series monthly composite images from Sentinel-2 to capture key phenological stages of crops. The segmentation output from different months is combined and post-processed to generate individual field instances using hierarchical watershed segmentation. The performance of our method is superior to existing multi-task models in both pixel-based (86.42% Matthews correlation coefficient (MCC)) and object-based accuracy measures (76% shape similarity and 60% intersection over union (IoU)). The ablation study shows that the EEB and SAB modules enhance the efficiency of feature extraction relevant to field extent and boundaries and improve accuracy. We conclude that the developed model and method can be used to improve the extraction of agricultural fields under weak supervision and different settings (satellite sensors and agricultural landscapes).
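The post-processing step, turning predicted extent and boundary maps into individual field instances with watershed segmentation, can be sketched with scikit-image. The example below uses tiny synthetic arrays and a plain (not hierarchical) watershed as an assumption:

```python
# Illustrative only: turning a predicted extent mask and boundary map into individual
# field instances with watershed segmentation (plain, not hierarchical). The tiny
# synthetic arrays below stand in for real network outputs.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

extent = np.zeros((20, 20), dtype=bool)        # predicted "field" pixels
extent[2:9, 2:18] = True
extent[12:18, 2:18] = True
boundary = np.zeros(extent.shape, dtype=float) # predicted boundary strength
boundary[9:12, :] = 1.0

distance = ndi.distance_transform_edt(extent)             # distance-to-edge surrogate
markers, _ = ndi.label(distance > 0.6 * distance.max())   # one seed region per field
fields = watershed(boundary, markers=markers, mask=extent)
print("field instances found:", fields.max())             # expected: 2
```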
Citations: 0
An automatic landmarking algorithm for leaf morphology based on conformal mapping
IF 7.7 | CAS Tier 1 (Agricultural and Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2025-03-17 | DOI: 10.1016/j.compag.2025.110274
Peige Zhong, Xiaojun Liu, Yulu Ye, Rui Zhang, Hu Zhou, Yan Guo, Baoguo Li, Jinyu Zhu, Yuntao Ma
Leaf shape is of great significance in plant phenotype research. The landmark method is a widely used morphometric approach that can comprehensively describe the morphological differences among leaves. However, the selection of landmarks is time-consuming and laborious. An automatic landmarking algorithm is proposed here. Based on conformal mapping, the leaf outline can be transformed into a monotonically increasing function curve, referred to as the ’fingerprint function’. The Dynamic Time Warping (DTW) algorithm was introduced to match landmarks between different leaves. Two leaf datasets were used to validate the algorithm separately across different species and developmental stages. Dataset1 is a public dataset covering 26 different types of leaves. The average positional difference between automatic and manual landmarks for Dataset1 was only 2.95%. Dataset2 consists of cotton leaves collected in the field at various growth stages, and the positional differences for this dataset were all below 5%. These results validate that our algorithm is applicable to a wide range of leaf types and capable of identifying and locating novel features that emerge during leaf growth. The automatic landmarking algorithm can simulate manual landmarking to a great extent. It provides a new approach for the automated acquisition of plant leaf shape homology tailored to the research needs of botanists.
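The DTW matching step can be illustrated with a plain dynamic-programming implementation over two monotone ’fingerprint’ curves; this is a generic DTW, not the authors' code:

```python
# Illustrative only: a plain dynamic-time-warping alignment between two monotone
# "fingerprint" curves, the kind of matching the method above relies on. This is a
# generic DTW, not the authors' implementation.
import numpy as np

def dtw_path(a, b):
    """Return one optimal warping path between sequences a and b."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    i, j, path = n, m, []
    while i > 0 and j > 0:                 # backtrack from the end of both curves
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

curve_a = np.cumsum(np.random.default_rng(5).random(40))   # monotone fingerprint A
curve_b = np.cumsum(np.random.default_rng(6).random(55))   # monotone fingerprint B
print("first matched index pairs:", dtw_path(curve_a, curve_b)[:5])
```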
Citations: 0