
Latest Articles in Plant Phenomics

One to All: Toward a Unified Model for Counting Cereal Crop Heads Based on Few-Shot Learning.
IF 7.6 CAS Tier 1 (Agriculture & Forestry Sciences) Q1 AGRONOMY Pub Date: 2024-11-28 eCollection Date: 2024-01-01 DOI: 10.34133/plantphenomics.0271
Qiang Wang, Xijian Fan, Ziqing Zhuang, Tardi Tjahjadi, Shichao Jin, Honghua Huan, Qiaolin Ye

Accurate counting of cereal crops, e.g., maize, rice, sorghum, and wheat, is crucial for estimating grain production and ensuring food security. However, existing methods for counting cereal crops focus predominantly on building models for a specific crop head; thus, they lack generalizability to different crop varieties. This paper presents Counting Heads of Cereal Crops Net (CHCNet), a unified model for counting multiple cereal crop heads via few-shot learning, which effectively reduces labeling costs. Specifically, a refined vision encoder is developed to enhance feature embedding, where a foundation model, namely, the segment anything model (SAM), is employed to emphasize the marked crop heads while mitigating complex background effects. Furthermore, a multiscale feature interaction module integrating a similarity metric is proposed to facilitate automatic learning of crop-specific features across varying scales, which enhances the ability to describe crop heads of various sizes and shapes. The CHCNet model adopts a 2-stage training procedure. The initial stage focuses on latent feature mining to capture common feature representations of cereal crops. In the subsequent stage, inference is performed without additional training by extracting domain-specific features of the target crop from selected exemplars to accomplish the counting task. In extensive experiments on 6 diverse crop datasets captured from ground cameras and drones, CHCNet substantially outperformed state-of-the-art counting methods in terms of cross-crop generalization, achieving mean absolute errors (MAEs) of 9.96 and 9.38 for maize, 13.94 for sorghum, 7.94 for rice, and 15.62 for mixed crops. A user-friendly interactive demo is available at http://cerealcropnet.com/, where researchers are invited to personally evaluate the proposed CHCNet. The source code for implementing CHCNet is available at https://github.com/Small-flyguy/CHCNet.
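The MAEs reported above are per-image count errors averaged over a test set. A minimal sketch of the metric, using hypothetical per-image head counts (the values below are for illustration only, not from the paper):

```python
import numpy as np

def mean_absolute_error(predicted, actual):
    """MAE between predicted and ground-truth head counts per image."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.mean(np.abs(predicted - actual)))

# Hypothetical per-image crop-head counts
pred = [102, 98, 110, 95]
true = [100, 100, 105, 100]
print(mean_absolute_error(pred, true))  # 3.5
```

A lower MAE on crops unseen during training is what the cross-crop generalization comparison above measures.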

Citations: 0
Drone-Based Digital Phenotyping to Evaluate Relative Maturity, Stand Count, and Plant Height in Dry Beans (Phaseolus vulgaris L.).
IF 7.6 CAS Tier 1 (Agriculture & Forestry Sciences) Q1 AGRONOMY Pub Date: 2024-11-28 eCollection Date: 2024-01-01 DOI: 10.34133/plantphenomics.0278
Leonardo Volpato, Evan M Wright, Francisco E Gomez

Substantial effort has been made to manually track plant maturity and to measure early-stage plant density and crop height in experimental fields. In this study, RGB drone imagery and deep learning (DL) approaches are explored to measure relative maturity (RM), stand count (SC), and plant height (PH), potentially offering higher throughput, accuracy, and cost-effectiveness than traditional methods. A time series of drone images was utilized to estimate dry bean RM employing a hybrid convolutional neural network (CNN) and long short-term memory (LSTM) model. For early-stage SC assessment, the Faster R-CNN object detection algorithm was evaluated. Flight frequencies, image resolution, and data augmentation techniques were investigated to enhance DL model performance. PH was obtained using a quantile method from digital surface model (DSM) and point cloud (PC) data sources. The CNN-LSTM model showed high accuracy in RM prediction across various conditions, outperforming traditional image preprocessing approaches. The inclusion of growing degree days (GDD) data improved the model's performance under specific environmental stresses. The Faster R-CNN model effectively identified early-stage bean plants, demonstrating superior accuracy over traditional methods and consistency across different flight altitudes. For PH estimation, moderate correlations with ground-truth data were observed across both datasets analyzed. The choice between PC and DSM source data may depend on specific environmental and flight conditions. Overall, the CNN-LSTM and Faster R-CNN models proved more effective than conventional techniques in quantifying RM and SC. The subtraction method proposed for estimating PH without accurate ground elevation data yielded results comparable to the difference-based method. Additionally, the pipeline and open-source software developed hold potential to significantly benefit the phenotyping community.
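The quantile method for PH mentioned above amounts to taking a high quantile of canopy heights above ground, which resists isolated noisy returns better than the maximum. A plausible sketch, with hypothetical DSM cell values (not the paper's implementation):

```python
import numpy as np

def plot_height_quantile(canopy_z, ground_z, q=95):
    """Estimate plot height as a high quantile of surface elevation
    minus ground elevation; robust to isolated noise spikes."""
    heights = np.asarray(canopy_z) - np.asarray(ground_z)
    return float(np.percentile(heights, q))

# Hypothetical DSM elevations (m) and matching ground elevations
canopy = np.array([1.10, 1.05, 0.98, 1.20, 2.50])  # 2.50 is a noise spike
ground = np.array([0.60, 0.58, 0.55, 0.62, 0.60])
print(round(plot_height_quantile(canopy, ground, q=75), 2))  # 0.58
```

Using `q=75` here ignores the spurious 1.9 m return that a simple maximum would report as the plot height.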

Citations: 0
PlanText: Gradually Masked Guidance to Align Image Phenotypes with Trait Descriptions for Plant Disease Texts.
IF 7.6 CAS Tier 1 (Agriculture & Forestry Sciences) Q1 AGRONOMY Pub Date: 2024-11-26 eCollection Date: 2024-01-01 DOI: 10.34133/plantphenomics.0272
Kejun Zhao, Xingcai Wu, Yuanyuan Xiao, Sijun Jiang, Peijia Yu, Yazhou Wang, Qi Wang

Plant diseases are a critical driver of the global food crisis. The integration of advanced artificial intelligence technologies can substantially enhance plant disease diagnostics. However, current methods for early and complex detection remain challenging. Employing multimodal technologies, akin to medical artificial intelligence diagnostics that combine diverse data types, may offer a more effective solution. Presently, the reliance on single-modal data predominates in plant disease research, which limits the scope for early and detailed diagnosis. Consequently, developing text modality generation techniques is essential for overcoming the limitations in plant disease recognition. To this end, we propose a method for aligning plant phenotypes with trait descriptions, which generates diagnostic text by progressively masking disease images. First, for training and validation, we annotate 5,728 disease phenotype images with expert diagnostic text and provide annotated text and trait labels for 210,000 disease images. Then, we propose a PhenoTrait text description model, which consists of global and heterogeneous feature encoders as well as switching-attention decoders, for accurate context-aware output. Next, to generate a more phenotypically appropriate description, we adopt 3 stages of embedding image features into semantic structures, which generate characterizations that preserve trait features. Finally, our experimental results show that our model outperforms several frontier models in multiple trait descriptions, including the larger models GPT-4 and GPT-4o. Our code and dataset are available at https://plantext.samlab.cn/.
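The "gradually masked" guidance above implies a schedule in which a growing fraction of image pixels is hidden at each step. The paper's actual mechanism is not reproduced here; the following is only a toy illustration of such a monotone masking schedule (all names hypothetical):

```python
import numpy as np

def progressive_masks(h, w, steps, seed=0):
    """Yield boolean masks that hide a growing fraction of pixels in a
    fixed random order, mimicking a gradually-masked guidance schedule."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(h * w)  # fixed order in which pixels get masked
    for step in range(1, steps + 1):
        frac = step / steps
        mask = np.zeros(h * w, dtype=bool)
        mask[order[: int(frac * h * w)]] = True
        yield mask.reshape(h, w)

masked_fracs = [m.mean() for m in progressive_masks(8, 8, steps=4)]
print(masked_fracs)  # masked fraction grows 0.25 -> 1.0
```

Keeping the masking order fixed across steps makes each mask a superset of the previous one, so the guidance signal shrinks monotonically.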

Citations: 0
Multi-Scale Attention Network for Vertical Seed Distribution in Soybean Breeding Fields.
IF 7.6 CAS Tier 1 (Agriculture & Forestry Sciences) Q1 AGRONOMY Pub Date: 2024-11-10 eCollection Date: 2024-01-01 DOI: 10.34133/plantphenomics.0260
Tang Li, Pieter M Blok, James Burridge, Akito Kaga, Wei Guo

The increase in the global population is leading to a doubling of the demand for protein. Soybean (Glycine max), a key contributor to global plant-based protein supplies, requires ongoing yield enhancements to keep pace with increasing demand. Precise, on-plant seed counting and localization may catalyze breeding selection of shoot architectures and seed localization patterns related to superior performance in high planting density and contribute to increased yield. Traditional manual counting and localization methods are labor-intensive and prone to error, necessitating more efficient approaches for yield prediction and seed distribution analysis. To solve this, we propose MSANet: a novel deep learning framework tailored for counting and localization of soybean seeds on mature field-grown soy plants. A multi-scale attention map mechanism was applied to maximize model performance in seed counting and localization in soybean breeding fields. We compared our model with a previous state-of-the-art model using the benchmark dataset and an enlarged dataset, including various soybean genotypes. Our model outperforms previous state-of-the-art methods on all datasets across various soybean genotypes on both counting and localization tasks. Furthermore, our model also performed well on in-canopy 360° video, dramatically increasing data collection efficiency. We also propose a technique that enables previously inaccessible insights into the phenotypic and genetic diversity of single plant vertical seed distribution, which may accelerate the breeding process. To accelerate further research in this domain, we have made our dataset and software publicly available: https://github.com/UTokyo-FieldPhenomics-Lab/MSANet.
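The multi-scale attention map mechanism above combines cues from several spatial resolutions. The sketch below is not MSANet; it is only a toy numpy stand-in showing how a 2D activation map can be pooled at several scales, upsampled, and fused:

```python
import numpy as np

def multi_scale_fusion(feature, scales=(1, 2, 4)):
    """Average-pool a 2D activation map at several scales, upsample each
    back by nearest-neighbour, and average - a toy stand-in for fusing
    multi-scale attention maps."""
    h, w = feature.shape
    fused = np.zeros_like(feature, dtype=float)
    for s in scales:
        # crop so the map divides evenly, then block-average
        pooled = feature[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        up = np.kron(pooled, np.ones((s, s)))  # nearest-neighbour upsample
        fused[:up.shape[0], :up.shape[1]] += up
    return fused / len(scales)

f = np.arange(16.0).reshape(4, 4)
print(multi_scale_fusion(f).shape)  # (4, 4)
```

Coarser scales contribute context (where seed-dense regions are on the plant), finer scales preserve the localization detail needed for counting.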

Citations: 0
Counting Canola: Toward Generalizable Aerial Plant Detection Models.
IF 7.6 CAS Tier 1 (Agriculture & Forestry Sciences) Q1 AGRONOMY Pub Date: 2024-11-08 eCollection Date: 2024-01-01 DOI: 10.34133/plantphenomics.0268
Erik Andvaag, Kaylie Krys, Steven J Shirtliffe, Ian Stavness

Plant population counts are highly valued by crop producers as important early-season indicators of field health. Traditionally, emergence rate estimates have been acquired through manual counting, an approach that is labor-intensive and relies heavily on sampling techniques. By applying deep learning-based object detection models to aerial field imagery, accurate plant population counts can be obtained for much larger areas of a field. Unfortunately, current detection models often perform poorly when they are faced with image conditions that do not closely resemble the data found in their training sets. In this paper, we explore how specific facets of a plant detector's training set can affect its ability to generalize to unseen image sets. In particular, we examine how a plant detection model's generalizability is influenced by the size, diversity, and quality of its training data. Our experiments show that the gap between in-distribution and out-of-distribution performance cannot be closed by merely increasing the size of a model's training set. We also demonstrate the importance of training set diversity in producing generalizable models, and show how different types of annotation noise can elicit different model behaviors in out-of-distribution test sets. We conduct our investigations with a large and diverse dataset of canola field imagery that we assembled over several years. We also present a new web tool, Canola Counter, which is specifically designed for remote-sensed aerial plant detection tasks. We use the Canola Counter tool to prepare our annotated canola seedling dataset and conduct our experiments. Both our dataset and web tool are publicly available.
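The "gap between in-distribution and out-of-distribution performance" above is typically summarized as the difference between mean errors on held-out fields that do and do not resemble the training data. A minimal sketch with hypothetical per-field errors (values are illustrative, not from the paper):

```python
import numpy as np

def generalization_gap(errors_in, errors_out):
    """Difference between out-of-distribution and in-distribution mean
    count error; a positive gap means the detector degrades on unseen
    imaging conditions."""
    return float(np.mean(errors_out) - np.mean(errors_in))

# Hypothetical per-field absolute count errors
in_dist = [3.1, 2.8, 3.4]   # fields resembling the training set
out_dist = [6.0, 7.2, 5.7]  # fields with unseen conditions
print(round(generalization_gap(in_dist, out_dist), 2))  # 3.2
```

The paper's finding is that this gap does not shrink from training-set size alone; diversity of conditions in the training imagery matters.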

Citations: 0
Phenotyping of Panicle Number and Shape in Rice Breeding Materials Based on Unmanned Aerial Vehicle Imagery.
IF 7.6 CAS Tier 1 (Agriculture & Forestry Sciences) Q1 AGRONOMY Pub Date: 2024-10-24 eCollection Date: 2024-01-01 DOI: 10.34133/plantphenomics.0265
Xuqi Lu, Yutao Shen, Jiayang Xie, Xin Yang, Qingyao Shu, Song Chen, Zhihui Shen, Haiyan Cen

The number of panicles per unit area (PNpA) is one of the key factors contributing to the grain yield of rice crops. Accurate PNpA quantification is vital for breeding high-yield rice cultivars. Previous studies were based on proximal sensing with fixed observation platforms or unmanned aerial vehicles (UAVs). The near-canopy images produced in these studies suffer from inefficiency and complex image processing pipelines that require manual image cropping and annotation. This study aims to develop an automated, high-throughput UAV imagery-based approach for field plot segmentation and panicle number quantification, along with a novel classification method for different panicle types, enhancing PNpA quantification at the plot level. RGB images of the rice canopy were efficiently captured at an altitude of 15 m, followed by image stitching and plot boundary recognition via a mask region-based convolutional neural network (Mask R-CNN). The images were then segmented into plot-scale subgraphs, which were categorized into 3 growth stages. The panicle vision transformer (Panicle-ViT), which integrates a multipath vision transformer and replaces the Mask R-CNN backbone, accurately detects panicles. Additionally, the Res2Net50 architecture classified panicle types with 4 angles of 0°, 15°, 45°, and 90°. The results confirm that the performance of Plot-Seg is comparable to that of manual segmentation. Panicle-ViT outperforms the traditional Mask R-CNN across all the datasets, with the average precision at 50% intersection over union (AP50) improved by 3.5% to 20.5%. The PNpA quantification for the full dataset achieved superior performance, with a coefficient of determination (R²) of 0.73 and a root mean square error (RMSE) of 28.3, and the overall panicle classification accuracy reached 94.8%. The proposed approach enhances operational efficiency and automates the process from plot cropping to PNpA prediction, which is promising for accelerating the selection of desired traits in rice breeding.
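The R² and RMSE figures above are the standard regression metrics for predicted versus observed counts. A minimal sketch with hypothetical plot-level counts (illustrative values only):

```python
import numpy as np

def r2_rmse(pred, obs):
    """Coefficient of determination and root mean square error for
    predicted vs. observed panicle counts."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    ss_res = np.sum((obs - pred) ** 2)              # residual sum of squares
    ss_tot = np.sum((obs - obs.mean()) ** 2)        # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    rmse = float(np.sqrt(np.mean((obs - pred) ** 2)))
    return float(r2), rmse

# Hypothetical plot-level panicle counts
pred = [120, 95, 150, 110]
obs = [118, 100, 145, 112]
r2, rmse = r2_rmse(pred, obs)
print(round(r2, 3), round(rmse, 2))
```

An R² of 0.73 with RMSE 28.3, as reported, means the pipeline explains most plot-to-plot variation while individual plots can still be off by a few dozen panicles.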

{"title":"Phenotyping of Panicle Number and Shape in Rice Breeding Materials Based on Unmanned Aerial Vehicle Imagery.","authors":"Xuqi Lu, Yutao Shen, Jiayang Xie, Xin Yang, Qingyao Shu, Song Chen, Zhihui Shen, Haiyan Cen","doi":"10.34133/plantphenomics.0265","DOIUrl":"https://doi.org/10.34133/plantphenomics.0265","url":null,"abstract":"<p><p>The number of panicles per unit area (PNpA) is one of the key factors contributing to the grain yield of rice crops. Accurate PNpA quantification is vital for breeding high-yield rice cultivars. Previous studies were based on proximal sensing with fixed observation platforms or unmanned aerial vehicles (UAVs). The near-canopy images produced in these studies suffer from inefficiency and complex image processing pipelines that require manual image cropping and annotation. This study aims to develop an automated, high-throughput UAV imagery-based approach for field plot segmentation and panicle number quantification, along with a novel classification method for different panicle types, enhancing PNpA quantification at the plot level. RGB images of the rice canopy were efficiently captured at an altitude of 15 m, followed by image stitching and plot boundary recognition via a mask region-based convolutional neural network (Mask R-CNN). The images were then segmented into plot-scale subgraphs, which were categorized into 3 growth stages. The panicle vision transformer (Panicle-ViT), which integrates a multipath vision transformer and replaces the Mask R-CNN backbone, accurately detects panicles. Additionally, the Res2Net50 architecture classified panicle types with 4 angles of 0°, 15°, 45°, and 90°. The results confirm that the performance of Plot-Seg is comparable to that of manual segmentation. Panicle-ViT outperforms the traditional Mask R-CNN across all the datasets, with the average precision at 50% intersection over union (AP<sub>50</sub>) improved by 3.5% to 20.5%. 
The PNpA quantification for the full dataset achieved superior performance, with a coefficient of determination (<i>R</i> <sup>2</sup>) of 0.73 and a root mean square error (RMSE) of 28.3, and the overall panicle classification accuracy reached 94.8%. The proposed approach enhances operational efficiency and automates the process from plot cropping to PNpA prediction, which is promising for accelerating the selection of desired traits in rice breeding.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"6 ","pages":"0265"},"PeriodicalIF":7.6,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11499587/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142506483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
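The agreement statistics reported above, R² and RMSE between predicted and manually counted panicles per plot, can be computed in a few lines of NumPy; the plot counts below are synthetic stand-ins, not the paper's data:

```python
import numpy as np

def pnpa_agreement(observed, predicted):
    """Coefficient of determination (R^2) and RMSE between manually
    counted and model-predicted panicle numbers per plot."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    residual = observed - predicted
    ss_res = np.sum(residual ** 2)                       # residual sum of squares
    ss_tot = np.sum((observed - observed.mean()) ** 2)   # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean(residual ** 2))
    return r2, rmse

# Synthetic per-plot panicle counts (stand-ins for illustration only)
obs = [120, 150, 180, 210, 240]
pred = [130, 140, 190, 200, 250]
r2, rmse = pnpa_agreement(obs, pred)
```

The same two numbers summarize any plot-level count regression, so the function applies unchanged to real Plot-Seg outputs.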
Citations: 0
Evaluating the Influence of Row Orientation and Crown Morphology on Growth of Pinus taeda L. with Drone-Based Airborne Laser Scanning. 利用无人机机载激光扫描技术评估行向和树冠形态对火炬松生长的影响
IF 7.6 Zone 1 Agricultural & Forestry Sciences Q1 AGRONOMY Pub Date : 2024-10-23 eCollection Date: 2024-01-01 DOI: 10.34133/plantphenomics.0264
Matthew J Sumnall, David R Carter, Timothy J Albaugh, Rachel L Cook, Otávio C Campoe, Rafael A Rubilar

The tree crown's directionality of growth may be an indicator of how aggressive the tree is in terms of foraging for light. Airborne drone laser scanning (DLS) has been used to accurately classify individual tree crowns (ITCs) and derive size metrics related to the crown. We compare ITCs among 6 genotypes exhibiting different crown architectures in managed loblolly pine (Pinus taeda L.) in the United States. DLS data are classified into ITC objects, and we present novel methods to calculate ITC shape metrics. Tree stems are located using (a) model-based clustering and (b) weighting cluster-based size. We generated ITC shape metrics using 3-dimensional (3D) alphashapes in 2 DLS acquisitions of the same location, 4 years apart. Crown horizontal distance from the stem was estimated at multiple heights, in addition to calculating 3D volume in specific azimuths. Crown morphologies varied significantly (P < 0.05) spatially, temporally, and among the 6 genotypes. Most genotypes exhibited larger crown volumes facing south (150° to 173°). We found that crown asymmetries were consistent with (a) the direction of solar radiation, (b) the spatial arrangement and proximity of the neighboring crowns, and (c) genotype. Larger crowns were associated with larger increases in stem volume, and increases in the southern portions of crown volume corresponded to larger stem-volume increases than increases in the northern portions. This finding suggests that row orientation could influence stem growth rates in plantations, particularly impacting earlier development. These differences may diminish over time, especially if stands are not thinned in a timely manner once canopy growing space has diminished.
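The directional crown comparison described above can be approximated by binning ITC points into azimuth sectors around the stem. This sketch uses a synthetic point cloud and a simple mean-horizontal-extent metric per sector in place of the paper's 3D alphashape volumes, so it illustrates the idea rather than reproducing the method:

```python
import numpy as np

def sector_extents(points, stem_xy, n_sectors=8):
    """Mean horizontal distance of crown points from the stem, per
    azimuth sector (0 deg = north, increasing clockwise)."""
    dx = points[:, 0] - stem_xy[0]   # east offset
    dy = points[:, 1] - stem_xy[1]   # north offset
    azimuth = (np.degrees(np.arctan2(dx, dy)) + 360.0) % 360.0
    radius = np.hypot(dx, dy)
    sector = (azimuth // (360.0 / n_sectors)).astype(int)
    return np.array([radius[sector == s].mean() if np.any(sector == s) else 0.0
                     for s in range(n_sectors)])

# Made-up crown: larger horizontal spread toward the south (azimuth ~180 deg)
rng = np.random.default_rng(0)
north = np.c_[rng.uniform(-1, 1, 200), rng.uniform(0, 2, 200), rng.uniform(5, 12, 200)]
south = np.c_[rng.uniform(-1, 1, 200), rng.uniform(-4, 0, 200), rng.uniform(5, 12, 200)]
extents = sector_extents(np.vstack([north, south]), stem_xy=(0.0, 0.0))
```

With 8 sectors of 45°, a south-biased crown yields larger extents in the sectors containing azimuth 180° than in the northern ones, which is the asymmetry signal the study relates to row orientation.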

Citations: 0
Cucumber Seedling Segmentation Network Based on a Multiview Geometric Graph Encoder from 3D Point Clouds. 基于三维点云多视角几何图编码器的黄瓜幼苗分割网络。
IF 7.6 Zone 1 Agricultural & Forestry Sciences Q1 AGRONOMY Pub Date : 2024-10-16 eCollection Date: 2024-01-01 DOI: 10.34133/plantphenomics.0254
Yonglong Zhang, Yaling Xie, Jialuo Zhou, Xiangying Xu, Minmin Miao

Plant phenotyping plays a pivotal role in observing and comprehending the growth and development of plants. In phenotyping, plant organ segmentation based on 3D point clouds has garnered increasing attention in recent years. However, using only the geometric relationship features of Euclidean space still cannot accurately segment and measure plants. To this end, we mine more geometric features and propose a segmentation network based on a multiview geometric graph encoder, called SN-MGGE. First, we construct a point cloud acquisition platform to obtain the cucumber seedling point cloud dataset, and employ CloudCompare software to annotate the point cloud data. The GGE module is then designed to generate the point features, including the geometric relationships and geometric shape structure, via a graph encoder over the Euclidean and hyperbolic spaces. Finally, the semantic segmentation results are obtained via a downsampling operation and multilayer perceptron. Extensive experiments on a cucumber seedling dataset clearly show that our proposed SN-MGGE network outperforms several mainstream segmentation networks (e.g., PointNet++, AGConv, and PointMLP), achieving mIoU and OA values of 94.90% and 97.43%, respectively. On the basis of the segmentation results, 4 phenotypic parameters (i.e., plant height, leaf length, leaf width, and leaf area) are extracted through the K-means clustering method; these parameters are very close to the ground truth, and the R² values reach 0.98, 0.96, 0.97, and 0.97, respectively. Furthermore, an ablation study and a generalization experiment also show that the SN-MGGE network is robust and generalizes well.
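The trait-extraction step above, K-means clustering of segmented points followed by per-leaf measurements, can be illustrated with a NumPy-only sketch; the point cloud is synthetic, and defining leaf length and width as extents along the leaf's first two principal axes is a simplifying assumption, not the paper's exact procedure:

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Minimal K-means: returns one cluster label per point."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([points[labels == i].mean(0) for i in range(k)])
    return labels

def leaf_dimensions(leaf_points):
    """Extents of the points projected onto their principal axes;
    axis 0 carries the most variance (length), axis 1 the width."""
    centered = leaf_points - leaf_points.mean(0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    extent = (centered @ vt.T).max(0) - (centered @ vt.T).min(0)
    return extent[0], extent[1]

# Two synthetic "leaves": thin elongated slabs, well separated in x
rng = np.random.default_rng(1)
leaf_a = np.c_[rng.uniform(0, 8, 300), rng.uniform(0, 3, 300), rng.normal(0, 0.05, 300)]
leaf_b = np.c_[rng.uniform(0, 6, 300) + 20, rng.uniform(0, 2, 300), rng.normal(0, 0.05, 300)]
cloud = np.vstack([leaf_a, leaf_b])
labels = kmeans(cloud, k=2)
dims = [leaf_dimensions(cloud[labels == i]) for i in range(2)]
```

On real segmentation output the same two steps run per organ class, with k set to the number of leaves expected at that growth stage.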

Citations: 0
GSP-AI: An AI-Powered Platform for Identifying Key Growth Stages and the Vegetative-to-Reproductive Transition in Wheat Using Trilateral Drone Imagery and Meteorological Data. GSP-AI:利用三边无人机图像和气象数据识别小麦关键生长阶段和营养生长到生殖生长转变的人工智能平台。
IF 7.6 Zone 1 Agricultural & Forestry Sciences Q1 AGRONOMY Pub Date : 2024-10-09 eCollection Date: 2024-01-01 DOI: 10.34133/plantphenomics.0255
Liyan Shen, Guohui Ding, Robert Jackson, Mujahid Ali, Shuchen Liu, Arthur Mitchell, Yeyin Shi, Xuqi Lu, Jie Dai, Greg Deakin, Katherine Frels, Haiyan Cen, Yu-Feng Ge, Ji Zhou

Wheat (Triticum aestivum) is one of the most important staple crops worldwide. To ensure its global supply, the timing and duration of its growth cycle need to be closely monitored in the field so that necessary crop management activities can be arranged in a timely manner. Also, breeders and plant researchers need to evaluate growth stages (GSs) for tens of thousands of genotypes at the plot level, at different sites and across multiple seasons. These needs indicate the importance of providing a reliable and scalable toolkit to address the challenge so that the plot-level assessment of GS can be successfully conducted for different objectives in plant research. Here, we present a multimodal deep learning model called GSP-AI, capable of identifying key GSs and predicting the vegetative-to-reproductive transition (i.e., flowering days) in wheat based on drone-collected canopy images and multiseasonal climatic datasets. In the study, we first established an open Wheat Growth Stage Prediction (WGSP) dataset, consisting of 70,410 annotated images collected from 54 varieties cultivated in China, 109 in the United Kingdom, and 100 in the United States together with key climatic factors. Then, we built an effective learning architecture based on Res2Net and long short-term memory (LSTM) to learn canopy-level vision features and patterns of climatic changes between 2018 and 2021 growing seasons. Utilizing the model, we achieved an overall accuracy of 91.2% in identifying key GSs and an average root mean square error (RMSE) of 5.6 d for forecasting the flowering days compared with manual scoring. We further tested and improved the GSP-AI model with high-resolution smartphone images collected in the 2021/2022 season in China, through which the accuracy of the model was enhanced to 93.4% for GS identification, and the RMSE was reduced to 4.7 d for the flowering prediction.
As a result, we believe that our work demonstrates a valuable advance to inform breeders and growers regarding the timing and duration of key plant growth and development phases at the plot level, facilitating them to conduct more effective crop selection and make agronomic decisions under complicated field conditions for wheat improvement.
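A multimodal model of this kind pairs per-image visual features with a sequence of climate records. As a rough illustration of the data flow only, this sketch replaces Res2Net with a given feature vector and the LSTM with mean pooling over the climate sequence, using random weights; all shapes, names, and the 6-class stage count are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(42)

def fuse_and_predict(image_feat, climate_seq, w_head, b_head):
    """Concatenate a canopy-image feature vector with pooled climate
    features, then apply a linear head: one logit per growth stage."""
    climate_feat = climate_seq.mean(axis=0)   # (n_climate,) stand-in for an LSTM
    fused = np.concatenate([image_feat, climate_feat])
    return fused @ w_head + b_head            # (n_classes,)

n_img, n_climate, n_classes = 128, 4, 6       # 6 key growth stages (illustrative)
image_feat = rng.normal(size=n_img)                 # simulated canopy-image embedding
climate_seq = rng.normal(size=(30, n_climate))      # 30 days x 4 climate factors
w_head = rng.normal(size=(n_img + n_climate, n_classes)) * 0.1
b_head = np.zeros(n_classes)

logits = fuse_and_predict(image_feat, climate_seq, w_head, b_head)
predicted_stage = int(np.argmax(logits))
```

The point of the late-fusion design is that the visual branch and the climate branch can be trained or swapped independently before the shared head.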

Citations: 0
MLG-YOLO: A Model for Real-Time Accurate Detection and Localization of Winter Jujube in Complex Structured Orchard Environments. MLG-YOLO:在结构复杂的果园环境中实时准确检测和定位冬枣的模型。
IF 7.6 Zone 1 Agricultural & Forestry Sciences Q1 AGRONOMY Pub Date : 2024-09-23 eCollection Date: 2024-01-01 DOI: 10.34133/plantphenomics.0258
Chenhao Yu, Xiaoyi Shi, Wenkai Luo, Junzhe Feng, Zhouzhou Zheng, Ayanori Yorozu, Yaohua Hu, Jiapan Guo

Our research focuses on winter jujube trees and is conducted in a greenhouse environment in a structured orchard to effectively control various growth conditions. The development of a robotic system for winter jujube harvesting is crucial for achieving mechanized harvesting. Harvesting winter jujubes efficiently requires accurate detection and localization. To address this issue, we proposed a winter jujube detection and localization method based on the MobileVit-Large selective kernel-GSConv-YOLO (MLG-YOLO) model. First, a winter jujube dataset comprising various lighting conditions and leaf-occlusion scenarios is constructed to train the model. Subsequently, the MLG-YOLO model based on YOLOv8n is proposed, with improvements including the incorporation of MobileViT to reconstruct the backbone and keep the model lightweight. The neck is enhanced with LSKblock to capture broader contextual information, and the lightweight convolutional technology GSConv is introduced to further improve the detection accuracy. Finally, a 3-dimensional localization method combining MLG-YOLO with RGB-D cameras is proposed. Through ablation studies, comparative experiments, 3-dimensional localization error tests, and full-scale tree detection tests in laboratory environments and structured orchard environments, the effectiveness of the MLG-YOLO model in detecting and locating winter jujubes is confirmed. With MLG-YOLO, the mAP increases by 3.50%, while the number of parameters is reduced by 61.03% in comparison with the baseline YOLOv8n model. Compared with mainstream object detection models, MLG-YOLO excels in both detection accuracy and model size, with a mAP of 92.70%, a precision of 86.80%, a recall of 84.50%, and a model size of only 2.52 MB. The average detection accuracy in laboratory environment tests of winter jujube reached 100%, and the structured orchard environment accuracy reached 92.82%.
The absolute positioning errors in the X, Y, and Z directions are 4.20, 4.70, and 3.90 mm, respectively. This method enables accurate detection and localization of winter jujubes, providing technical support for winter jujube harvesting robots.
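The 3-dimensional localization step, mapping a detected bounding-box center plus its depth reading to camera-frame coordinates, follows standard pinhole back-projection; the intrinsics below are placeholders for illustration, not the calibration of the camera used in the paper:

```python
import numpy as np

def deproject(u, v, depth_mm, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with depth (mm) to camera-frame
    X, Y, Z coordinates (mm), pinhole model without lens distortion."""
    z = float(depth_mm)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Placeholder intrinsics for a 640x480 depth stream (illustrative values)
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0
# Center of a detected jujube box at pixel (380, 200), 750 mm from the camera
xyz = deproject(380, 200, 750, fx, fy, cx, cy)
```

In a harvesting pipeline the detector supplies (u, v), the aligned depth frame supplies depth_mm, and the resulting camera-frame point is transformed into the robot arm's base frame via the hand-eye calibration.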

Citations: 0