
Latest Publications in Plant Phenomics

Multi-Scale Attention Network for Vertical Seed Distribution in Soybean Breeding Fields.
IF 7.6 CAS Tier 1 (Agricultural & Forestry Sciences) Q1 AGRONOMY Pub Date: 2024-11-10 eCollection Date: 2024-01-01 DOI: 10.34133/plantphenomics.0260
Tang Li, Pieter M Blok, James Burridge, Akito Kaga, Wei Guo

The increase in the global population is leading to a doubling of the demand for protein. Soybean (Glycine max), a key contributor to global plant-based protein supplies, requires ongoing yield enhancements to keep pace with increasing demand. Precise, on-plant seed counting and localization may catalyze breeding selection of shoot architectures and seed localization patterns related to superior performance in high planting density and contribute to increased yield. Traditional manual counting and localization methods are labor-intensive and prone to error, necessitating more efficient approaches for yield prediction and seed distribution analysis. To solve this, we propose MSANet: a novel deep learning framework tailored for counting and localization of soybean seeds on mature field-grown soy plants. A multi-scale attention map mechanism was applied to maximize model performance in seed counting and localization in soybean breeding fields. We compared our model with a previous state-of-the-art model using the benchmark dataset and an enlarged dataset, including various soybean genotypes. Our model outperforms previous state-of-the-art methods on all datasets across various soybean genotypes on both counting and localization tasks. Furthermore, our model also performed well on in-canopy 360° video, dramatically increasing data collection efficiency. We also propose a technique that enables previously inaccessible insights into the phenotypic and genetic diversity of single plant vertical seed distribution, which may accelerate the breeding process. To accelerate further research in this domain, we have made our dataset and software publicly available: https://github.com/UTokyo-FieldPhenomics-Lab/MSANet.
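The counting-and-localization task MSANet addresses can be illustrated in miniature: density-map counting networks output a per-pixel map whose integral is the object count and whose local maxima mark object locations. The sketch below is a generic illustration of that idea under assumed inputs, not the authors' implementation; the `density` array stands in for a network's predicted map.

```python
import numpy as np

def count_and_localize(density: np.ndarray, peak_thresh: float = 0.5, win: int = 3):
    """Count objects as the integral of a predicted density map and
    localize them as local maxima above a threshold.

    `density` is a hypothetical H x W map such as a counting network
    might output. Ties between equal-valued neighbors are each kept;
    real systems break ties with non-maximum suppression."""
    count = float(density.sum())
    h, w = density.shape
    pad = win // 2
    padded = np.pad(density, pad, mode="constant")
    peaks = []
    for i in range(h):
        for j in range(w):
            v = density[i, j]
            # A pixel is a peak if it meets the threshold and is the
            # maximum of its win x win neighborhood.
            if v >= peak_thresh and v == padded[i:i + win, j:j + win].max():
                peaks.append((i, j))
    return count, peaks
```

For a map with two isolated bumps of mass 1.0 and 0.8, this returns a count of 1.8 and both peak coordinates.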

Citations: 0
Counting Canola: Toward Generalizable Aerial Plant Detection Models.
IF 7.6 CAS Tier 1 (Agricultural & Forestry Sciences) Q1 AGRONOMY Pub Date: 2024-11-08 eCollection Date: 2024-01-01 DOI: 10.34133/plantphenomics.0268
Erik Andvaag, Kaylie Krys, Steven J Shirtliffe, Ian Stavness

Plant population counts are highly valued by crop producers as important early-season indicators of field health. Traditionally, emergence rate estimates have been acquired through manual counting, an approach that is labor-intensive and relies heavily on sampling techniques. By applying deep learning-based object detection models to aerial field imagery, accurate plant population counts can be obtained for much larger areas of a field. Unfortunately, current detection models often perform poorly when they are faced with image conditions that do not closely resemble the data found in their training sets. In this paper, we explore how specific facets of a plant detector's training set can affect its ability to generalize to unseen image sets. In particular, we examine how a plant detection model's generalizability is influenced by the size, diversity, and quality of its training data. Our experiments show that the gap between in-distribution and out-of-distribution performance cannot be closed by merely increasing the size of a model's training set. We also demonstrate the importance of training set diversity in producing generalizable models, and show how different types of annotation noise can elicit different model behaviors in out-of-distribution test sets. We conduct our investigations with a large and diverse dataset of canola field imagery that we assembled over several years. We also present a new web tool, Canola Counter, which is specifically designed for remote-sensed aerial plant detection tasks. We use the Canola Counter tool to prepare our annotated canola seedling dataset and conduct our experiments. Both our dataset and web tool are publicly available.
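The in-distribution versus out-of-distribution gap the paper studies can be written down directly: evaluate the same error metric on a held-out set drawn from the training distribution and on a foreign image set, then take the difference. A minimal sketch using mean absolute counting error (the metric choice here is an assumption for illustration, not the paper's exact evaluation):

```python
def count_mae(predicted, actual):
    """Mean absolute error between predicted and true per-image plant counts."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def generalization_gap(pred_id, true_id, pred_ood, true_ood):
    """Out-of-distribution error minus in-distribution error.

    A gap near zero indicates the detector generalizes; the paper's
    finding is that simply enlarging the training set does not close it."""
    return count_mae(pred_ood, true_ood) - count_mae(pred_id, true_id)
```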

Citations: 0
Phenotyping of Panicle Number and Shape in Rice Breeding Materials Based on Unmanned Aerial Vehicle Imagery.
IF 7.6 CAS Tier 1 (Agricultural & Forestry Sciences) Q1 AGRONOMY Pub Date: 2024-10-24 eCollection Date: 2024-01-01 DOI: 10.34133/plantphenomics.0265
Xuqi Lu, Yutao Shen, Jiayang Xie, Xin Yang, Qingyao Shu, Song Chen, Zhihui Shen, Haiyan Cen

The number of panicles per unit area (PNpA) is one of the key factors contributing to the grain yield of rice crops. Accurate PNpA quantification is vital for breeding high-yield rice cultivars. Previous studies were based on proximal sensing with fixed observation platforms or unmanned aerial vehicles (UAVs). The near-canopy images produced in these studies suffer from inefficiency and complex image processing pipelines that require manual image cropping and annotation. This study aims to develop an automated, high-throughput UAV imagery-based approach for field plot segmentation and panicle number quantification, along with a novel classification method for different panicle types, enhancing PNpA quantification at the plot level. RGB images of the rice canopy were efficiently captured at an altitude of 15 m, followed by image stitching and plot boundary recognition via a mask region-based convolutional neural network (Mask R-CNN). The images were then segmented into plot-scale subgraphs, which were categorized into 3 growth stages. The panicle vision transformer (Panicle-ViT), which integrates a multipath vision transformer and replaces the Mask R-CNN backbone, accurately detects panicles. Additionally, the Res2Net50 architecture classified panicle types with 4 angles of 0°, 15°, 45°, and 90°. The results confirm that the performance of Plot-Seg is comparable to that of manual segmentation. Panicle-ViT outperforms the traditional Mask R-CNN across all the datasets, with the average precision at 50% intersection over union (AP50) improved by 3.5% to 20.5%. The PNpA quantification for the full dataset achieved superior performance, with a coefficient of determination (R²) of 0.73 and a root mean square error (RMSE) of 28.3, and the overall panicle classification accuracy reached 94.8%.
The proposed approach enhances operational efficiency and automates the process from plot cropping to PNpA prediction, which is promising for accelerating the selection of desired traits in rice breeding.
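AP50, the headline detection metric above, scores a predicted box as correct when its intersection over union (IoU) with a ground-truth box reaches 0.5. A minimal IoU helper plus a greedy precision-at-0.5 check (an illustrative sketch, not the paper's evaluation code):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def precision_at_50(preds, truths):
    """Fraction of predicted boxes that greedily match an unused
    ground-truth box at IoU >= 0.5."""
    unused = list(truths)
    tp = 0
    for p in preds:
        best = max(unused, key=lambda t: iou(p, t), default=None)
        if best is not None and iou(p, best) >= 0.5:
            tp += 1
            unused.remove(best)
    return tp / len(preds) if preds else 0.0
```

Full AP50 additionally sweeps the detector's confidence threshold and averages precision over recall levels; this sketch shows only the matching rule.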

Citations: 0
Evaluating the Influence of Row Orientation and Crown Morphology on Growth of Pinus taeda L. with Drone-Based Airborne Laser Scanning.
IF 7.6 CAS Tier 1 (Agricultural & Forestry Sciences) Q1 AGRONOMY Pub Date: 2024-10-23 eCollection Date: 2024-01-01 DOI: 10.34133/plantphenomics.0264
Matthew J Sumnall, David R Carter, Timothy J Albaugh, Rachel L Cook, Otávio C Campoe, Rafael A Rubilar

The tree crown's directionality of growth may be an indicator of how aggressive the tree is in terms of foraging for light. Airborne drone laser scanning (DLS) has been used to accurately classify individual tree crowns (ITCs) and derive size metrics related to the crown. We compare ITCs among 6 genotypes exhibiting different crown architectures in managed loblolly pine (Pinus taeda L.) in the United States. DLS data are classified into ITC objects, and we present novel methods to calculate ITC shape metrics. Tree stems are located using (a) model-based clustering and (b) weighting cluster-based size. We generated ITC shape metrics using 3-dimensional (3D) alphashapes in 2 DLS acquisitions of the same location, 4 years apart. Crown horizontal distance from the stem was estimated at multiple heights, in addition to calculating 3D volume in specific azimuths. Crown morphologies varied significantly (P < 0.05) spatially, temporally, and among the 6 genotypes. Most genotypes exhibited larger crown volumes facing south (150° to 173°). We found that crown asymmetries were consistent with (a) the direction of solar radiation, (b) the spatial arrangement and proximity of the neighboring crowns, and (c) genotype. Larger crowns were associated with larger increases in stem volume, and gains in the southern portion of crown volume corresponded to larger stem volume increases than gains in the northern portion. This finding suggests that row orientation could influence stem growth rates in plantations, particularly impacting earlier development. These differences may diminish over time, especially if stands are not thinned in a timely manner once canopy growing space has diminished.
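The per-azimuth crown volume idea can be sketched by binning crown points into compass sectors around the stem (0° = north) and reporting each sector's share; point counts are a crude stand-in for the alphashape-derived volumes computed in the study, and the sector count below is an assumption.

```python
import numpy as np

def sector_shares(points: np.ndarray, stem_xy, n_sectors: int = 8) -> np.ndarray:
    """Share of crown points per azimuth sector around the stem.

    `points` is an N x 2 array of (east, north) coordinates; azimuth is
    measured clockwise from north, so sector 0 is centered on north-ish
    directions. A point-count proxy, not an alphashape volume."""
    dx = points[:, 0] - stem_xy[0]  # east offset
    dy = points[:, 1] - stem_xy[1]  # north offset
    az = (np.degrees(np.arctan2(dx, dy)) + 360.0) % 360.0
    idx = (az // (360.0 / n_sectors)).astype(int)
    counts = np.bincount(idx, minlength=n_sectors)
    return counts / counts.sum()
```

A crown with most of its share in the southern sectors (around 150° to 210°) would reproduce the asymmetry pattern the study reports.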

Citations: 0
Cucumber Seedling Segmentation Network Based on a Multiview Geometric Graph Encoder from 3D Point Clouds.
IF 7.6 CAS Tier 1 (Agricultural & Forestry Sciences) Q1 AGRONOMY Pub Date: 2024-10-16 eCollection Date: 2024-01-01 DOI: 10.34133/plantphenomics.0254
Yonglong Zhang, Yaling Xie, Jialuo Zhou, Xiangying Xu, Minmin Miao

Plant phenotyping plays a pivotal role in observing and comprehending the growth and development of plants. In phenotyping, plant organ segmentation based on 3D point clouds has garnered increasing attention in recent years. However, using only the geometric relationship features of Euclidean space still cannot accurately segment and measure plants. To this end, we mine more geometric features and propose a segmentation network based on a multiview geometric graph encoder, called SN-MGGE. First, we construct a point cloud acquisition platform to obtain the cucumber seedling point cloud dataset, and employ CloudCompare software to annotate the point cloud data. The GGE module is then designed to generate the point features, including the geometric relationships and geometric shape structure, via a graph encoder over the Euclidean and hyperbolic spaces. Finally, the semantic segmentation results are obtained via a downsampling operation and multilayer perceptron. Extensive experiments on a cucumber seedling dataset clearly show that our proposed SN-MGGE network outperforms several mainstream segmentation networks (e.g., PointNet++, AGConv, and PointMLP), achieving mIoU and OA values of 94.90% and 97.43%, respectively. On the basis of the segmentation results, 4 phenotypic parameters (i.e., plant height, leaf length, leaf width, and leaf area) are extracted through the K-means clustering method; these parameters are very close to the ground truth, and the R² values reach 0.98, 0.96, 0.97, and 0.97, respectively. Furthermore, an ablation study and a generalization experiment also show that the SN-MGGE network is robust and generalizes well.
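The mIoU and overall accuracy (OA) figures reported above are standard semantic-segmentation metrics; for reference, a minimal computation over per-point class labels (a generic sketch, not the SN-MGGE evaluation code):

```python
import numpy as np

def miou_oa(pred: np.ndarray, true: np.ndarray, n_classes: int):
    """Mean intersection-over-union and overall accuracy for per-point labels.

    Classes absent from both prediction and ground truth are skipped so
    they do not drag the mean down."""
    oa = float((pred == true).mean())
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, true == c).sum()
        union = np.logical_or(pred == c, true == c).sum()
        if union:
            ious.append(inter / union)
    return float(np.mean(ious)), oa
```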

Citations: 0
GSP-AI: An AI-Powered Platform for Identifying Key Growth Stages and the Vegetative-to-Reproductive Transition in Wheat Using Trilateral Drone Imagery and Meteorological Data.
IF 7.6 CAS Tier 1 (Agricultural & Forestry Sciences) Q1 AGRONOMY Pub Date: 2024-10-09 eCollection Date: 2024-01-01 DOI: 10.34133/plantphenomics.0255
Liyan Shen, Guohui Ding, Robert Jackson, Mujahid Ali, Shuchen Liu, Arthur Mitchell, Yeyin Shi, Xuqi Lu, Jie Dai, Greg Deakin, Katherine Frels, Haiyan Cen, Yu-Feng Ge, Ji Zhou

Wheat (Triticum aestivum) is one of the most important staple crops worldwide. To ensure its global supply, the timing and duration of its growth cycle needs to be closely monitored in the field so that necessary crop management activities can be arranged in a timely manner. Also, breeders and plant researchers need to evaluate growth stages (GSs) for tens of thousands of genotypes at the plot level, at different sites and across multiple seasons. These indicate the importance of providing a reliable and scalable toolkit to address the challenge so that the plot-level assessment of GS can be successfully conducted for different objectives in plant research. Here, we present a multimodal deep learning model called GSP-AI, capable of identifying key GSs and predicting the vegetative-to-reproductive transition (i.e., flowering days) in wheat based on drone-collected canopy images and multiseasonal climatic datasets. In the study, we first established an open Wheat Growth Stage Prediction (WGSP) dataset, consisting of 70,410 annotated images collected from 54 varieties cultivated in China, 109 in the United Kingdom, and 100 in the United States together with key climatic factors. Then, we built an effective learning architecture based on Res2Net and long short-term memory (LSTM) to learn canopy-level vision features and patterns of climatic changes between 2018 and 2021 growing seasons. Utilizing the model, we achieved an overall accuracy of 91.2% in identifying key GS and an average root mean square error (RMSE) of 5.6 d for forecasting the flowering days compared with manual scoring. We further tested and improved the GSP-AI model with high-resolution smartphone images collected in the 2021/2022 season in China, through which the accuracy of the model was enhanced to 93.4% for GS and RMSE reduced to 4.7 d for the flowering prediction. 
As a result, we believe that our work demonstrates a valuable advance in informing breeders and growers of the timing and duration of key plant growth and development phases at the plot level, helping them conduct more effective crop selection and make agronomic decisions under complicated field conditions for wheat improvement.
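The flowering-day errors reported above (RMSE of 5.6 d, later reduced to 4.7 d) are root mean square errors between predicted and manually scored day-of-year values; as a reference point, the computation is simply:

```python
import math

def flowering_rmse(pred_days, obs_days):
    """RMSE in days between predicted and manually scored flowering dates
    (both expressed as day-of-year integers)."""
    n = len(obs_days)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred_days, obs_days)) / n)
```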

{"title":"GSP-AI: An AI-Powered Platform for Identifying Key Growth Stages and the Vegetative-to-Reproductive Transition in Wheat Using Trilateral Drone Imagery and Meteorological Data.","authors":"Liyan Shen, Guohui Ding, Robert Jackson, Mujahid Ali, Shuchen Liu, Arthur Mitchell, Yeyin Shi, Xuqi Lu, Jie Dai, Greg Deakin, Katherine Frels, Haiyan Cen, Yu-Feng Ge, Ji Zhou","doi":"10.34133/plantphenomics.0255","DOIUrl":"10.34133/plantphenomics.0255","url":null,"abstract":"<p><p>Wheat (<i>Triticum aestivum</i>) is one of the most important staple crops worldwide. To ensure its global supply, the timing and duration of its growth cycle needs to be closely monitored in the field so that necessary crop management activities can be arranged in a timely manner. Also, breeders and plant researchers need to evaluate growth stages (GSs) for tens of thousands of genotypes at the plot level, at different sites and across multiple seasons. These indicate the importance of providing a reliable and scalable toolkit to address the challenge so that the plot-level assessment of GS can be successfully conducted for different objectives in plant research. Here, we present a multimodal deep learning model called GSP-AI, capable of identifying key GSs and predicting the vegetative-to-reproductive transition (i.e., flowering days) in wheat based on drone-collected canopy images and multiseasonal climatic datasets. In the study, we first established an open Wheat Growth Stage Prediction (WGSP) dataset, consisting of 70,410 annotated images collected from 54 varieties cultivated in China, 109 in the United Kingdom, and 100 in the United States together with key climatic factors. Then, we built an effective learning architecture based on Res2Net and long short-term memory (LSTM) to learn canopy-level vision features and patterns of climatic changes between 2018 and 2021 growing seasons. Utilizing the model, we achieved an overall accuracy of 91.2% in identifying key GS and an average root mean square error (RMSE) of 5.6 d for forecasting the flowering days compared with manual scoring. We further tested and improved the GSP-AI model with high-resolution smartphone images collected in the 2021/2022 season in China, through which the accuracy of the model was enhanced to 93.4% for GS and RMSE reduced to 4.7 d for the flowering prediction. As a result, we believe that our work demonstrates a valuable advance to inform breeders and growers regarding the timing and duration of key plant growth and development phases at the plot level, facilitating them to conduct more effective crop selection and make agronomic decisions under complicated field conditions for wheat improvement.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"6 ","pages":"0255"},"PeriodicalIF":7.6,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11462051/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142392656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
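GSP-AI's two headline figures are a classification accuracy for growth-stage identification (91.2%, later 93.4%) and an RMSE in days for flowering-date forecasts (5.6 d, later 4.7 d). A minimal sketch of how such scores are computed against manual scoring — the function names and toy data below are illustrative, not taken from the paper:

```python
import math

def gs_accuracy(predicted, observed):
    """Fraction of plots whose predicted growth stage matches the manual score."""
    assert len(predicted) == len(observed)
    hits = sum(p == o for p, o in zip(predicted, observed))
    return hits / len(observed)

def flowering_rmse(predicted_days, observed_days):
    """Root mean square error (in days) between forecast and observed flowering days."""
    assert len(predicted_days) == len(observed_days)
    squared_errors = [(p - o) ** 2 for p, o in zip(predicted_days, observed_days)]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# Toy example: 3 plots scored against manual observations.
acc = gs_accuracy(["booting", "heading", "flowering"],
                  ["booting", "heading", "grain fill"])
rmse = flowering_rmse([152, 148, 160], [150, 149, 155])
```

Per-plot day-of-year predictions would come from the Res2Net+LSTM model; the metrics themselves are model-agnostic.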
Citations: 0
MLG-YOLO: A Model for Real-Time Accurate Detection and Localization of Winter Jujube in Complex Structured Orchard Environments.
IF 7.6 CAS Tier 1 (Agriculture & Forestry Science) Q1 AGRONOMY Pub Date: 2024-09-23 eCollection Date: 2024-01-01 DOI: 10.34133/plantphenomics.0258
Chenhao Yu, Xiaoyi Shi, Wenkai Luo, Junzhe Feng, Zhouzhou Zheng, Ayanori Yorozu, Yaohua Hu, Jiapan Guo

Our research focuses on winter jujube trees and is conducted in a greenhouse environment in a structured orchard to effectively control various growth conditions. The development of a robotic system for winter jujube harvesting is crucial for achieving mechanized harvesting. Harvesting winter jujubes efficiently requires accurate detection and location. To address this issue, we proposed a winter jujube detection and localization method based on the MobileVit-Large selective kernel-GSConv-YOLO (MLG-YOLO) model. First, a winter jujube dataset is constructed to comprise various scenarios of lighting conditions and leaf obstructions to train the model. Subsequently, the MLG-YOLO model based on YOLOv8n is proposed, with improvements including the incorporation of MobileViT to reconstruct the backbone and keep the model more lightweight. The neck is enhanced with LSKblock to capture broader contextual information, and the lightweight convolutional technology GSConv is introduced to further improve the detection accuracy. Finally, a 3-dimensional localization method combining MLG-YOLO with RGB-D cameras is proposed. Through ablation studies, comparative experiments, 3-dimensional localization error tests, and full-scale tree detection tests in laboratory environments and structured orchard environments, the effectiveness of the MLG-YOLO model in detecting and locating winter jujubes is confirmed. With MLG-YOLO, the mAP increases by 3.50%, while the number of parameters is reduced by 61.03% in comparison with the baseline YOLOv8n model. Compared with mainstream object detection models, MLG-YOLO excels in both detection accuracy and model size, with a mAP of 92.70%, a precision of 86.80%, a recall of 84.50%, and a model size of only 2.52 MB. The average detection accuracy in the laboratory environmental testing of winter jujube reached 100%, and the structured orchard environmental accuracy reached 92.82%. The absolute positioning errors in the X, Y, and Z directions are 4.20, 4.70, and 3.90 mm, respectively. This method enables accurate detection and localization of winter jujubes, providing technical support for winter jujube harvesting robots.
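The 3-dimensional localization step couples a 2D detection with an RGB-D camera. A standard way to recover a fruit's camera-frame coordinates from its detected pixel and measured depth is pinhole back-projection; the intrinsics below (fx, fy, cx, cy) are illustrative values, not the paper's camera calibration:

```python
def backproject(u, v, depth_mm, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth Z (mm) to camera-frame X, Y, Z (mm)."""
    x = (u - cx) * depth_mm / fx
    y = (v - cy) * depth_mm / fy
    return x, y, depth_mm

# Illustrative intrinsics for a 640x480 depth sensor (assumed, not from the paper).
fx = fy = 600.0
cx, cy = 320.0, 240.0

# Center pixel of a detected jujube bounding box, at 500 mm measured depth.
X, Y, Z = backproject(380.0, 210.0, 500.0, fx, fy, cx, cy)
```

The reported 4.20/4.70/3.90 mm errors would then reflect detection-center jitter and depth noise propagated through this projection.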

{"title":"MLG-YOLO: A Model for Real-Time Accurate Detection and Localization of Winter Jujube in Complex Structured Orchard Environments.","authors":"Chenhao Yu, Xiaoyi Shi, Wenkai Luo, Junzhe Feng, Zhouzhou Zheng, Ayanori Yorozu, Yaohua Hu, Jiapan Guo","doi":"10.34133/plantphenomics.0258","DOIUrl":"10.34133/plantphenomics.0258","url":null,"abstract":"<p><p>Our research focuses on winter jujube trees and is conducted in a greenhouse environment in a structured orchard to effectively control various growth conditions. The development of a robotic system for winter jujube harvesting is crucial for achieving mechanized harvesting. Harvesting winter jujubes efficiently requires accurate detection and location. To address this issue, we proposed a winter jujube detection and localization method based on the MobileVit-Large selective kernel-GSConv-YOLO (MLG-YOLO) model. First, a winter jujube dataset is constructed to comprise various scenarios of lighting conditions and leaf obstructions to train the model. Subsequently, the MLG-YOLO model based on YOLOv8n is proposed, with improvements including the incorporation of MobileViT to reconstruct the backbone and keep the model more lightweight. The neck is enhanced with LSKblock to capture broader contextual information, and the lightweight convolutional technology GSConv is introduced to further improve the detection accuracy. Finally, a 3-dimensional localization method combining MLG-YOLO with RGB-D cameras is proposed. Through ablation studies, comparative experiments, 3-dimensional localization error tests, and full-scale tree detection tests in laboratory environments and structured orchard environments, the effectiveness of the MLG-YOLO model in detecting and locating winter jujubes is confirmed. With MLG-YOLO, the mAP increases by 3.50%, while the number of parameters is reduced by 61.03% in comparison with the baseline YOLOv8n model. Compared with mainstream object detection models, MLG-YOLO excels in both detection accuracy and model size, with a mAP of 92.70%, a precision of 86.80%, a recall of 84.50%, and a model size of only 2.52 MB. The average detection accuracy in the laboratory environmental testing of winter jujube reached 100%, and the structured orchard environmental accuracy reached 92.82%. The absolute positioning errors in the <i>X</i>, <i>Y</i>, and <i>Z</i> directions are 4.20, 4.70, and 3.90 mm, respectively. This method enables accurate detection and localization of winter jujubes, providing technical support for winter jujube harvesting robots.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"6 ","pages":"0258"},"PeriodicalIF":7.6,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11418275/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142308443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fruit Water Stress Index of Apple Measured by Means of Temperature-Annotated 3D Point Cloud.
IF 6.5 CAS Tier 1 (Agriculture & Forestry Science) Q1 AGRONOMY Pub Date: 2024-09-18 DOI: 10.34133/plantphenomics.0252
Nikos Tsoulias, Arash Khosravi, Werner B Herppich, Manuela Zude-Sasse
In applied ecophysiological studies related to global warming and water scarcity, the water status of fruit is of increasing importance in the context of fresh food production. In the present work, a fruit water stress index (FWSI) is introduced for close analysis of the relationship between fruit and air temperatures. A sensor system consisting of a light detection and ranging (LiDAR) sensor and a thermal camera was employed to remotely analyze apple trees (Malus x domestica Borkh. "Gala") by means of 3D point clouds. After geometric calibration of the sensor system, the temperature values were assigned in the corresponding 3D point cloud to reconstruct a thermal point cloud of the entire canopy. The annotated points belonging to the fruit were segmented, providing annotated fruit point clouds. The estimated 3D distribution of fruit surface temperature (T_Est) was highly correlated to manually recorded reference temperature (r² = 0.93). As a methodological innovation, based on T_Est, the fruit water stress index (FWSI_Est) was introduced, potentially providing more detailed information on the fruit compared to the crop water stress index of the whole canopy obtained from established 2D thermal imaging. FWSI_Est showed low error when compared to manual reference data. Considering in total 302 apples, FWSI_Est increased during the season. Additional diel measurements on 50 apples, at 6 measurements per day (600 measurements in total), were performed in the commercial harvest window. FWSI_Est calculated with air temperature plus 5 °C appeared as diel hysteresis. Such diurnal changes of FWSI_Est, and those throughout fruit development, provide a new ecophysiological tool for 3D spatiotemporal fruit analysis and, by capturing more samples more efficiently, for insight into the specific requirements of crop management.
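A water stress index is conventionally computed as (T_surface − T_wet) / (T_dry − T_wet), and the abstract states that the fruit index was calculated with air temperature plus 5 °C, which suggests T_air + 5 °C as the dry (non-transpiring) reference. A minimal sketch under that reading — the wet-reference choice (T_air itself) and the toy temperatures are illustrative stand-ins, not the paper's exact definition:

```python
def fwsi(t_fruit, t_air, t_wet=None, dry_offset=5.0):
    """Fruit water stress index: 0 = unstressed (wet baseline), 1 = fully stressed.

    t_dry is taken as air temperature + dry_offset (5 deg C, as in the abstract);
    if no wet reference is supplied, t_air is used as an illustrative stand-in.
    """
    t_dry = t_air + dry_offset
    if t_wet is None:
        t_wet = t_air
    return (t_fruit - t_wet) / (t_dry - t_wet)

# Mean surface temperature over one segmented fruit point cloud (toy values, deg C).
fruit_point_temps = [26.8, 27.1, 27.3, 27.0]
t_mean = sum(fruit_point_temps) / len(fruit_point_temps)
index = fwsi(t_mean, t_air=25.0)
```

Applied per fruit over the temperature-annotated point cloud, this yields the per-fruit index whose diel hysteresis the paper tracks.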
{"title":"Fruit Water Stress Index of Apple Measured by Means of Temperature-Annotated 3D Point Cloud.","authors":"Nikos Tsoulias,Arash Khosravi,Werner B Herppich,Manuela Zude-Sasse","doi":"10.34133/plantphenomics.0252","DOIUrl":"https://doi.org/10.34133/plantphenomics.0252","url":null,"abstract":"In applied ecophysiological studies related to global warming and water scarcity, the water status of fruit is of increasing importance in the context of fresh food production. In the present work, a fruit water stress index (FWSI) is introduced for close analysis of the relationship between fruit and air temperatures. A sensor system consisting of light detection and ranging (LiDAR) sensor and thermal camera was employed to remotely analyze apple trees (Malus x domestica Borkh. \"Gala\") by means of 3D point clouds. After geometric calibration of the sensor system, the temperature values were assigned in the corresponding 3D point cloud to reconstruct a thermal point cloud of the entire canopy. The annotated points belonging to the fruit were segmented, providing annotated fruit point clouds. Such estimated 3D distribution of fruit surface temperature (T Est) was highly correlated to manually recorded reference temperature (r 2 = 0.93). As methodological innovation, based on T Est, the fruit water stress index (FWSI Est) was introduced, potentially providing more detailed information on the fruit compared to the crop water stress index of whole canopy obtained from established 2D thermal imaging. FWSI Est showed low error when compared to manual reference data. Considering in total 302 apples, FWSI Est increased during the season. Additional diel measurements on 50 apples, each at 6 measurements per day (in total 600 apples), were performed in the commercial harvest window. FWSI Est calculated with air temperature plus 5 °C appeared as diel hysteresis. Such diurnal changes of FWSI Est and those throughout fruit development provide a new ecophysiological tool aimed at 3D spatiotemporal fruit analysis and particularly more efficient, capturing more samples, insight in the specific requests of crop management.","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"18 1","pages":"0252"},"PeriodicalIF":6.5,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142248676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Rape Yield Estimation Considering Non-Foliar Green Organs Based on the General Crop Growth Model.
IF 7.6 CAS Tier 1 (Agriculture & Forestry Science) Q1 AGRONOMY Pub Date: 2024-09-17 eCollection Date: 2024-01-01 DOI: 10.34133/plantphenomics.0253
Shiwei Ruan, Hong Cao, Shangrong Wu, Yujing Ma, Wenjuan Li, Yong Jin, Hui Deng, Guipeng Chen, Wenbin Wu, Peng Yang

To address the underestimation of rape yield by traditional gramineous crop yield simulation methods based on crop models, this study used the WOFOST crop model to estimate rape yield in the main producing areas of southern Hunan based on 2 years of field-measured data, with consideration given to the photosynthesis of siliques, which are non-foliar green organs. First, the total photosynthetic area index (TPAI), which considers the photosynthesis of siliques, was proposed as a substitute for the leaf area index (LAI) as the calibration variable in the model. Two parameter calibration methods were subsequently proposed, both of which consider photosynthesis by siliques: the TPAI-SPA method, which is based on the TPAI coupled with a specific pod area, and the TPAI-Curve method, which is based on the TPAI and curve fitting. Finally, the 2 proposed parameter calibration methods were validated via 2 years of observed rape data. The results indicate that compared with traditional LAI-based crop model calibration methods, the TPAI-SPA and TPAI-Curve methods can improve the accuracy of rape yield estimation. The estimation accuracy (R²) for the total weight of storage organs (TWSO) and above-ground biomass (TAGP) increased by 9.68% and 49.86%, respectively, for the TPAI-SPA method and by 14.04% and 42.94%, respectively, for the TPAI-Curve method. Thus, the 2 calibration methods proposed in this study are of practical importance for improving the accuracy of rape yield simulations. This study provides a novel technical approach for utilizing crop growth models in the yield estimation of oilseed crops.
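The key idea is to replace the leaf area index with a total photosynthetic area index that also counts silique (pod) surface area. Under a simple reading of the TPAI-SPA approach — TPAI as LAI plus a pod area index derived from pod density and a specific pod area — a sketch (the variable names and numbers are illustrative; the abstract does not give the exact formulation):

```python
def pod_area_index(pods_per_m2, specific_pod_area_m2):
    """Silique (pod) surface area per unit ground area, analogous to LAI."""
    return pods_per_m2 * specific_pod_area_m2

def tpai(lai, pods_per_m2, specific_pod_area_m2):
    """Total photosynthetic area index: leaf area plus non-foliar green pod area."""
    return lai + pod_area_index(pods_per_m2, specific_pod_area_m2)

# Toy plot: LAI 2.0, 3000 pods per m^2, each contributing 2 cm^2 (2e-4 m^2).
index = tpai(2.0, 3000, 2e-4)
```

Substituting such an index for LAI as the WOFOST calibration variable is what lets the simulated canopy keep assimilating after leaf senescence, when pods dominate photosynthesis.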

{"title":"Rape Yield Estimation Considering Non-Foliar Green Organs Based on the General Crop Growth Model.","authors":"Shiwei Ruan, Hong Cao, Shangrong Wu, Yujing Ma, Wenjuan Li, Yong Jin, Hui Deng, Guipeng Chen, Wenbin Wu, Peng Yang","doi":"10.34133/plantphenomics.0253","DOIUrl":"10.34133/plantphenomics.0253","url":null,"abstract":"<p><p>To address the underestimation of rape yield by traditional gramineous crop yield simulation methods based on crop models, this study used the WOFOST crop model to estimate rape yield in the main producing areas of southern Hunan based on 2 years of field-measured data, with consideration given to the photosynthesis of siliques, which are non-foliar green organs. First, the total photosynthetic area index (TPAI), which considers the photosynthesis of siliques, was proposed as a substitute for the leaf area index (LAI) as the calibration variable in the model. Two parameter calibration methods were subsequently proposed, both of which consider photosynthesis by siliques: the TPAI-SPA method, which is based on the TPAI coupled with a specific pod area, and the TPAI-Curve method, which is based on the TPAI and curve fitting. Finally, the 2 proposed parameter calibration methods were validated via 2 years of observed rape data. The results indicate that compared with traditional LAI-based crop model calibration methods, the TPAI-SPA and TPAI-Curve methods can improve the accuracy of rape yield estimation. The estimation accuracy (<i>R</i> <sup>2</sup>) for the total weight of storage organs (TWSO) and above-ground biomass (TAGP) increased by 9.68% and 49.86%, respectively, for the TPAI-SPA method and by 14.04% and 42.94%, respectively, for the TPAI-Curve method. Thus, the 2 calibration methods proposed in this study are of important practical importance for improving the accuracy of rape yield simulations. This study provides a novel technical approach for utilizing crop growth models in the yield estimation of oilseed crops.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"6 ","pages":"0253"},"PeriodicalIF":7.6,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11651415/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142847572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Auto-LIA: The Automated Vision-Based Leaf Inclination Angle Measurement System Improves Monitoring of Plant Physiology.
IF 6.5 CAS Tier 1 (Agriculture & Forestry Science) Q1 AGRONOMY Pub Date: 2024-09-11 DOI: 10.34133/plantphenomics.0245
Sijun Jiang, Xingcai Wu, Qi Wang, Zhixun Pei, Yuxiang Wang, Jian Jin, Ying Guo, RunJiang Song, Liansheng Zang, Yong-Jin Liu, Gefei Hao
Plant sensors are commonly used in agricultural production, landscaping, and other fields to monitor plant growth and environmental parameters. As an important basic parameter in plant monitoring, leaf inclination angle (LIA) not only influences light absorption and pesticide loss but also contributes to genetic analysis and other plant phenotypic data collection. The measurements of LIA provide a basis for crop research as well as agricultural management, such as water loss, pesticide absorption, and illumination radiation. On the one hand, existing efficient solutions, represented by light detection and ranging (LiDAR), can provide the average leaf angle distribution of a plot. On the other hand, the labor-intensive schemes represented by hand measurements can show high accuracy. However, the existing methods suffer from low automation and weak leaf-plant correlation, limiting the application of individual plant leaf phenotypes. To improve the efficiency of LIA measurement and provide the correlation between leaf and plant, we design an image-phenotype-based noninvasive and efficient optical sensor measurement system, which combines multiple processes implemented via computer vision technologies with RGB images collected by physical sensing devices. Specifically, we utilize object detection to associate leaves with plants and adopt 3-dimensional reconstruction techniques to recover the spatial information of leaves in computational space. Then, we propose a spatial continuity-based segmentation algorithm combined with a graphical operation to implement the extraction of leaf key points. Finally, we seek the connection between the computational space and the actual physical space and put forward a method of leaf transformation to realize the localization and recovery of the LIA in physical space. Overall, our solution is characterized by noninvasiveness, full-process automation, and strong leaf-plant correlation, which enables efficient measurements at low cost. In this study, we validate Auto-LIA for practicality and compare the accuracy with the best solution that is acquired with an expensive and invasive LiDAR device. Our solution demonstrates its competitiveness and usability at a much lower equipment cost, with an accuracy only 2.5° less than that of the widely used LiDAR. As an intelligent processing system for plant sensor signals, Auto-LIA provides fully automated measurement of LIA, improving the monitoring of plant physiological information for plant protection. We make our code and data publicly available at http://autolia.samlab.cn.
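Given a leaf's key points recovered in 3D, the inclination angle can be obtained from the angle between the leaf surface normal and the vertical (equivalently, between the leaf plane and the horizontal). A minimal sketch computing a normal from two in-plane vectors — the three-key-point layout is an illustrative simplification of the paper's leaf-transformation step:

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def leaf_inclination_deg(p0, p1, p2):
    """Angle (degrees) between the plane through three leaf key points and the horizontal."""
    v1 = tuple(b - a for a, b in zip(p0, p1))
    v2 = tuple(b - a for a, b in zip(p0, p2))
    n = cross(v1, v2)
    norm = math.sqrt(sum(c * c for c in n))
    # The tilt of the plane equals the angle between its normal and the vertical axis.
    cos_tilt = abs(n[2]) / norm
    return math.degrees(math.acos(cos_tilt))

# A leaf plane tilted 45 degrees about the x-axis: contains (0,0,0), (1,0,0), (0,1,1).
angle = leaf_inclination_deg((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 1.0))
```

In practice the key points would come from the segmented leaf point cloud, and a least-squares plane fit over all leaf points would be more robust than three points.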
{"title":"Auto-LIA: The Automated Vision-Based Leaf Inclination Angle Measurement System Improves Monitoring of Plant Physiology.","authors":"Sijun Jiang,Xingcai Wu,Qi Wang,Zhixun Pei,Yuxiang Wang,Jian Jin,Ying Guo,RunJiang Song,Liansheng Zang,Yong-Jin Liu,Gefei Hao","doi":"10.34133/plantphenomics.0245","DOIUrl":"https://doi.org/10.34133/plantphenomics.0245","url":null,"abstract":"Plant sensors are commonly used in agricultural production, landscaping, and other fields to monitor plant growth and environmental parameters. As an important basic parameter in plant monitoring, leaf inclination angle (LIA) not only influences light absorption and pesticide loss but also contributes to genetic analysis and other plant phenotypic data collection. The measurements of LIA provide a basis for crop research as well as agricultural management, such as water loss, pesticide absorption, and illumination radiation. On the one hand, existing efficient solutions, represented by light detection and ranging (LiDAR), can provide the average leaf angle distribution of a plot. On the other hand, the labor-intensive schemes represented by hand measurements can show high accuracy. However, the existing methods suffer from low automation and weak leaf-plant correlation, limiting the application of individual plant leaf phenotypes. To improve the efficiency of LIA measurement and provide the correlation between leaf and plant, we design an image-phenotype-based noninvasive and efficient optical sensor measurement system, which combines multi-processes implemented via computer vision technologies and RGB images collected by physical sensing devices. Specifically, we utilize object detection to associate leaves with plants and adopt 3-dimensional reconstruction techniques to recover the spatial information of leaves in computational space. Then, we propose a spatial continuity-based segmentation algorithm combined with a graphical operation to implement the extraction of leaf key points. Finally, we seek the connection between the computational space and the actual physical space and put forward a method of leaf transformation to realize the localization and recovery of the LIA in physical space. Overall, our solution is characterized by noninvasiveness, full-process automation, and strong leaf-plant correlation, which enables efficient measurements at low cost. In this study, we validate Auto-LIA for practicality and compare the accuracy with the best solution that is acquired with an expensive and invasive LiDAR device. Our solution demonstrates its competitiveness and usability at a much lower equipment cost, with an accuracy of only 2. 5° less than that of the widely used LiDAR. As an intelligent processing system for plant sensor signals, Auto-LIA provides fully automated measurement of LIA, improving the monitoring of plant physiological information for plant protection. We make our code and data publicly available at http://autolia.samlab.cn.","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"32 1","pages":"0245"},"PeriodicalIF":6.5,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142209421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0