
Latest Publications in Plant Phenomics

IMP2RIS, an automated plant root PET radiotracer gas delivery system for in-soil visualization of symbiotic N2 fixation in nodulated roots of soybean plants via PET imaging.
IF 6.4 | Tier 1, Agricultural and Forestry Sciences | Q1 AGRONOMY | Pub Date: 2025-03-14 | eCollection Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100027
Alireza Nakhforoosh, Emil Hallin, Zongyu Wang, Micheal Hogue, Hillary H Mehlhorn, Grant Tingstad, Leon Kochian

The real-time and non-invasive visualization and quantification of symbiotic nitrogen fixation (SNF) in nodulated roots of soybean plants using Positron Emission Tomography (PET) imaging, coupled with the application of [13N]N2 gas as a PET radiotracer, has been explored in only a few studies. In these studies, [13N]N2 was delivered to nodulated soybean roots suspended in air within gas-tight acrylic boxes, followed by two-dimensional (2D) PET imaging to visualize the assimilated [13N]N2 in the air-suspended root nodules. In this paper, we introduce the In-Media Plant PET Root Imaging System (IMP2RIS), a novel gas delivery system designed and constructed in-house. Unlike the previous methods, IMP2RIS allows for non-intrusive delivery and exposure of [13N]N2 gas to the nodulated roots of soybean plants grown in a clay-rich, soil-like, and visually opaque growth medium. This advancement enabled in-soil, three-dimensional (3D) visualization of SNF in soybean root nodules using Sofie, a preclinical PET scanner. Equipped with automated controls, IMP2RIS ensures ease of operation and operator safety during the [13N]N2 delivery process. We describe the components and functionalities of IMP2RIS, supported by experimental results showcasing its successful application in efficient delivery and exposure of [13N]N2 gas to nodulated roots of three soybean plant cultivars that vary in rates of N2 fixation. The in-soil quantitative PET imaging of SNF, aided by IMP2RIS, holds promise for integrating SNF as a functional phenotypic trait into breeding programs, with the goal of improving SNF efficiency by identifying breeding materials with high SNF capacities.

Citations: 0
3D reconstruction enables high-throughput phenotyping and quantitative genetic analysis of phyllotaxy.
IF 6.4 | Tier 1, Agricultural and Forestry Sciences | Q1 AGRONOMY | Pub Date: 2025-03-08 | eCollection Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100023
Jensina M Davis, Mathieu Gaillard, Michael C Tross, Nikee Shrestha, Ian Ostermann, Ryleigh J Grove, Bosheng Li, Bedrich Benes, James C Schnable

Differences in canopy architecture play a role in determining both light and water use efficiency. Canopy architecture is determined by several component traits, including leaf length, width, number, angle, and phyllotaxy. Phyllotaxy may be among the most difficult of the leaf canopy traits to measure accurately across large numbers of individual plants. As a result, in simulations of the leaf canopies of grain crops such as maize and sorghum, this trait is frequently approximated as alternating 180° angles between sequential leaves. We explore the feasibility of extracting direct measurements of the phyllotaxy of sequential leaves from 3D reconstructions of individual sorghum plants generated from calibrated 2D images and test the assumption of consistently alternating phyllotaxy across a diverse set of sorghum genotypes. Using a voxel-carving-based approach, we generate 3D reconstructions from multiple calibrated 2D images of 366 sorghum plants representing 236 sorghum genotypes from the sorghum association panel. The correlation between automated and manual measurements of phyllotaxy is only modestly lower than the correlation between manual measurements of phyllotaxy generated by two different individuals. Automated phyllotaxy measurements exhibited a repeatability of R2 = 0.41 across imaging timepoints separated by a period of two days. A resampling-based genome-wide association study (GWAS) identified several putative genetic associations with lower-canopy phyllotaxy in sorghum. This study demonstrates the potential of 3D reconstruction to enable both quantitative genetic investigation and breeding for phyllotaxy in sorghum and other grain crops with similar plant architectures.
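As a rough illustration of how such divergence angles can be read off a reconstruction, the sketch below computes the horizontal azimuth of each leaf's base-to-tip vector and the angle between sequential leaves; the array layout and function name are hypothetical and are not taken from the paper's pipeline.

```python
import numpy as np

def phyllotaxy_angles(leaf_bases, leaf_tips):
    """Divergence angles (degrees) between sequential leaves.

    leaf_bases, leaf_tips: (n_leaves, 3) arrays of xyz coordinates,
    ordered from the lowest to the highest leaf on the stem.
    """
    # Project each leaf's base-to-tip vector onto the horizontal (xy) plane.
    vectors = np.asarray(leaf_tips, float) - np.asarray(leaf_bases, float)
    azimuths = np.degrees(np.arctan2(vectors[:, 1], vectors[:, 0]))

    # Divergence angle between sequential leaves, wrapped into [0, 180].
    diffs = np.abs(np.diff(azimuths)) % 360.0
    return np.where(diffs > 180.0, 360.0 - diffs, diffs)

# Example: three leaves alternating at roughly 180 degrees.
bases = np.zeros((3, 3))
tips = np.array([[1.0, 0.0, 0.2], [-0.9, 0.1, 0.5], [1.1, -0.1, 0.8]])
print(phyllotaxy_angles(bases, tips))  # values near 180
```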

Citations: 0
Retrieving the chlorophyll content of individual apple trees by reducing canopy shadow impact via a 3D radiative transfer model and UAV multispectral imagery.
IF 6.4 | Tier 1, Agricultural and Forestry Sciences | Q1 AGRONOMY | Pub Date: 2025-03-06 | eCollection Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100015
Chengjian Zhang, Zhibo Chen, Riqiang Chen, Wenjie Zhang, Dan Zhao, Guijun Yang, Bo Xu, Haikuan Feng, Hao Yang

Accurate monitoring and spatial mapping of the leaf chlorophyll content (LCC) and canopy chlorophyll content (CCC) of individual apple trees are highly important for the effective management of individual plants and for building modern smart orchards. However, the estimation of LCC and CCC is affected by shadows caused by canopy structure and observation geometry. In this study, we resolved the response relationship between individual apple tree crown spectra and shadows through a three-dimensional radiative transfer model (3D RTM) and unmanned aerial vehicle (UAV) multispectral images, assessed the resistance of a series of vegetation indices (VIs) to shadows, and developed a hybrid inversion model that is resistant to shadow interference. The results revealed that (1) the proportion of individual tree canopy shadows exhibited a parabolic trend with time, with a minimum occurring at noon. Correspondingly, the reflectance in the visible band decreased with increasing canopy shadow ratio and reached a maximum value at noon, whereas the pattern of change in the reflectance in the near-infrared band was opposite that in the visible band. (2) The accuracy of chlorophyll content estimation varies among different VIs at different canopy shadow ratios. The five VIs most resistant to changes in canopy shadow ratio are the NDVI-RE, CIre, CIgreen, TVI, and GNDVI. (3) For the constructed 3D RTM + GPR hybrid inversion model, only four VIs, namely, NDVI-RE, CIre, CIgreen, and TVI, need to be input to achieve the best inversion accuracy. (4) Both the LCC and the CCC of individual trees had good validation accuracy (LCC: R2 = 0.775, RMSE = 6.86 μg/cm2, nRMSE = 12.24%; CCC: R2 = 0.784, RMSE = 32.33 μg/cm2, nRMSE = 14.49%), and their distributions at the orchard scale were characterized by considerable spatial heterogeneity. This study provides ideas for investigating the relationship between individual tree canopy shadows and spectra and offers a new strategy for minimizing the influence of shadow effects on the accurate estimation of chlorophyll content in individual apple trees.
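The abstract's hybrid 3D RTM + GPR inversion pairs simulated spectra with Gaussian process regression; the minimal sketch below mimics that pattern with a toy synthetic table standing in for the RTM output and the four retained VIs (NDVI-RE, CIre, CIgreen, TVI) as features. The toy data, the trait relation, and the observed VI values are illustrative assumptions, not the paper's simulations.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def ndvi_re(nir, red_edge):
    """Red-edge NDVI, one of the four shadow-robust VIs retained by the model."""
    return (nir - red_edge) / (nir + red_edge + 1e-9)

# Toy stand-in for the RTM-simulated training table: columns are the four VIs
# (NDVI-RE, CIre, CIgreen, TVI); the target is leaf chlorophyll content (ug/cm2).
X_sim = rng.uniform(0.1, 0.9, size=(500, 4))
lcc_sim = 60.0 * X_sim[:, 0] + 10.0 * X_sim[:, 1] + rng.normal(0.0, 2.0, 500)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X_sim, lcc_sim)

# Retrieval for one UAV-observed crown: its four VI values (NDVI-RE shown explicitly).
vi_obs = np.array([[ndvi_re(0.45, 0.30), 0.55, 0.60, 0.40]])
lcc_pred, lcc_std = gpr.predict(vi_obs, return_std=True)
print(f"LCC = {lcc_pred[0]:.1f} +/- {lcc_std[0]:.1f} ug/cm2")
```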

Citations: 0
RGB imaging-based evaluation of waterlogging tolerance in cultivated and wild chrysanthemums.
IF 6.4 | Tier 1, Agricultural and Forestry Sciences | Q1 AGRONOMY | Pub Date: 2025-03-06 | eCollection Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100019
Siyue Wang, Yang Yang, Junwei Zeng, Limin Zhao, Haibin Wang, Sumei Chen, Weimin Fang, Fei Zhang, Jiangshuo Su, Fadi Chen

Waterlogging is a major stress that impacts the chrysanthemum industry. Large-scale germplasm screening for identifying waterlogging-tolerant resources in a quick and accurate manner is essential for developing new cultivars with improved waterlogging tolerance. To overcome this phenotyping bottleneck, consumer-grade digital cameras were used to acquire red-green-blue (RGB) images of 180 chrysanthemum cultivars and their wild relatives under waterlogging stress and well-watered conditions. A total of 103 image-based digital traits (i-traits), including 10 morphological i-traits and 93 texture i-traits, were extracted and systematically analyzed. Most of these i-traits presented high coefficients of variation (CVs) and broad-sense heritability (H2), with an average CV of 34.04% and an average H2 of 0.93. We identified several novel texture i-traits associated with the hue (H) component, which strongly correlated with the traditional waterlogging tolerance index, the membership function value of waterlogging (MFVW) (R = 0.63-0.77). We further employed the random forest (RF) and gradient boosting tree (GBT) machine learning algorithms to predict aboveground biomass and MFVW on the basis of different i-trait datasets. The RF model achieved superior predictive performance, with a coefficient of determination (R2) of up to 0.88 for shoot weight and 0.86 for MFVW. Moreover, a subset of the top 13 most important i-traits could accurately predict MFVW (R2 > 0.80) via the cross-validation method. A total of 10 highly tolerant resources were selected by traditional and RGB-based evaluation, and 50% belonged to Artemisia. Our findings confirmed that RGB-based technology provides a promising novel approach for quantifying waterlogging responses that contributes to future breeding programs and genetic dissection of waterlogging tolerance.
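A minimal sketch of the random forest step described above, predicting MFVW from a small i-trait matrix with five-fold cross-validation and ranking trait importance; the placeholder data and the choice of 13 features simply mirror the abstract and are not the study's actual values.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Placeholder i-trait matrix: e.g. the 13 most informative i-traits per accession.
X = rng.normal(size=(180, 13))
mfvw = 0.5 + 0.1 * X[:, 0] - 0.05 * X[:, 1] + rng.normal(0, 0.05, 180)  # toy target

rf = RandomForestRegressor(n_estimators=500, random_state=0)
r2_scores = cross_val_score(rf, X, mfvw, cv=5, scoring="r2")
print("cross-validated R2:", r2_scores.mean())

# Trait importances can then be ranked to pick a compact i-trait subset.
rf.fit(X, mfvw)
ranking = np.argsort(rf.feature_importances_)[::-1]
print("most important i-trait indices:", ranking[:5])
```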

Citations: 0
Combining UAV multisensor field phenotyping and genome-wide association studies to reveal the genetic basis of plant height in cotton (Gossypium hirsutum).
IF 6.4 | Tier 1, Agricultural and Forestry Sciences | Q1 AGRONOMY | Pub Date: 2025-03-05 | eCollection Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100026
Liqiang Fan, Jiajie Yang, Xuwen Wang, Zhao Liu, Bowei Xu, Li Liu, Chenxu Gao, Xiantao Ai, Fuguang Li, Lei Gao, Yu Yu, Zuoren Yang

Plant height (PH) is a key agronomic trait influencing plant architecture. Suitable PH values for cotton are important for lodging resistance, high planting density, and mechanized harvesting, making it crucial to elucidate the mechanisms of the genetic regulation of PH. However, traditional field PH phenotyping largely relies on manual measurements, limiting its large-scale application. In this study, a high-throughput phenotyping platform based on UAV-mounted RGB and light detection and ranging (LiDAR) sensors was developed to efficiently and accurately obtain time-series PHs of 419 cotton accessions in the field. Different strategies were used to extract PH values from the two sets of sensor data, and the extracted values were used to train linear regression and machine learning models to obtain PH predictions. These predictions were consistent with manual measurements of PH for both the LiDAR (R2 = 0.934) and RGB (R2 = 0.914) data. The predicted PH values were used for GWAS analysis, and 34 PH-related genes were identified, two of which, GhPH1 and GhUBP15, have been demonstrated to regulate PH in cotton. We further identified significant differences in the expression of a new gene named GhPH_UAV1 in the stems of the G. hirsutum cultivar ZM24 harvested on the 15th, 35th, and 70th days after sowing compared with those from a dwarf mutant (pag1), which presented shortened stem and internode phenotypes. The overexpression of GhPH_UAV1 significantly promoted cotton stem development, whereas its knockout by CRISPR-Cas9 dramatically inhibited stem growth, suggesting that GhPH_UAV1 plays a positive regulatory role in cotton PH. This field-scale high-throughput phenotype monitoring platform significantly improves the ability to obtain high-quality phenotypic data from large populations, which helps overcome the imbalance between massive genotypic data and the shortage of field phenotypic data and facilitates the integration of genotype and phenotype research for crop improvement.
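For readers unfamiliar with how PH is usually extracted from UAV data, the sketch below shows one common strategy, assuming a canopy height model is available (CHM = DSM - DTM): take a high percentile of CHM pixels per plot and calibrate against manual measurements with linear regression. The percentile choice and the toy numbers are assumptions, not the paper's exact extraction strategies.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def plot_height(chm_plot, percentile=99):
    """Plant height for one plot from a canopy height model (CHM = DSM - DTM).

    Using a high percentile instead of the maximum suppresses isolated
    noisy pixels or returns.
    """
    values = chm_plot[np.isfinite(chm_plot)]
    return np.percentile(values, percentile)

# Calibrate UAV-derived heights against manual ruler measurements.
uav_ph = np.array([[0.82], [0.95], [1.10], [1.24], [1.31]])   # meters, from CHM
manual_ph = np.array([0.80, 0.97, 1.08, 1.27, 1.30])          # meters, field check

model = LinearRegression().fit(uav_ph, manual_ph)
r2 = model.score(uav_ph, manual_ph)
print(f"slope={model.coef_[0]:.3f}, intercept={model.intercept_:.3f}, R2={r2:.3f}")
```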

Citations: 0
Segmenting vegetation from UAV images via spectral reconstruction in complex field environments.
IF 6.4 | Tier 1, Agricultural and Forestry Sciences | Q1 AGRONOMY | Pub Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100021
Zhixun Pei, Xingcai Wu, Xue Wu, Yuanyuan Xiao, Peijia Yu, Zhenran Gao, Qi Wang, Wei Guo

Segmentation of vegetation in remote sensing images minimizes background interference, enabling efficient monitoring and analysis of vegetation information. Vegetation segmentation poses a significant challenge due to inherently complex environmental conditions. Currently, there is a growing trend of combining spectral sensing with deep learning for field vegetation segmentation to cope with complex environments. However, two major constraints remain: the high cost of the equipment required for field spectral data collection, and the limited availability of field datasets together with time-consuming, labor-intensive data annotation. To address these challenges, we propose a weakly supervised approach for field vegetation segmentation built on spectral reconstruction (SR) techniques and drawing on vegetation index (VI) theory. Specifically, to reduce the cost of data acquisition, we propose SRCNet and SRANet, based on convolutional and attention structures, respectively, to reconstruct multispectral images of fields. Then, borrowing from the VI principle, we aggregate the reconstructed data to establish connections between spectral bands, obtaining more salient vegetation information. Finally, we employ an adaptation strategy to segment the fused feature map using a weakly supervised method, which does not require manual labeling to obtain field vegetation segmentation results. Our segmentation method achieves a Mean Intersection over Union (MIoU) of 0.853 on real field datasets, outperforming existing methods. In addition, we have open-sourced a dataset of unmanned aerial vehicle (UAV) RGB-multispectral images, comprising 2358 pairs of samples, to improve the richness of remote sensing agricultural data. The code and data are available at https://github.com/GZU-SAMLab/VegSegment_SR and http://sr-seg.samlab.cn/.
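A minimal sketch of the VI-based weak supervision idea, assuming reconstructed red and NIR bands are available: compute NDVI, threshold it with Otsu's method to obtain a vegetation mask without manual labels, and score the mask with intersection-over-union. The thresholding rule and toy arrays are illustrative, not the paper's adaptation strategy.

```python
import numpy as np
from skimage.filters import threshold_otsu

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

def vegetation_mask(nir, red):
    """Weak vegetation mask: threshold the NDVI map (no manual labels needed)."""
    index = ndvi(nir, red)
    return index > threshold_otsu(index)

def iou(pred, target):
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union else 1.0

# Toy 2x2 example; in practice nir/red come from the reconstructed band images.
nir = np.array([[0.6, 0.1], [0.7, 0.2]])
red = np.array([[0.1, 0.3], [0.1, 0.4]])
gt = np.array([[1, 0], [1, 0]])
mask = vegetation_mask(nir, red)
print(mask, iou(mask, gt))
```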

Citations: 0
CVRP: A rice image dataset with high-quality annotations for image segmentation and plant phenomics research.
IF 6.4 | Tier 1, Agricultural and Forestry Sciences | Q1 AGRONOMY | Pub Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100025
Zhiyan Tang, Jiandong Sun, Yunlu Tian, Jiexiong Xu, Weikun Zhao, Gang Jiang, Jiaqi Deng, Xiangchao Gan

Machine learning models for crop image analysis and phenomics are highly important for precision agriculture and breeding and have been the subject of intensive research. However, the lack of publicly available high-quality image datasets with detailed annotations has severely hindered the development of these models. In this work, we present a comprehensive multicultivar and multiview rice plant image dataset (CVRP) created from 231 landraces and 50 modern cultivars grown under dense planting in paddy fields. The dataset includes images capturing rice plants in their natural environment, as well as indoor images focusing specifically on panicles, allowing for a detailed investigation of cultivar-specific differences. A semiautomatic annotation process using deep learning models was designed, followed by rigorous manual curation. We demonstrated the utility of the CVRP by evaluating the performance of four state-of-the-art (SOTA) semantic segmentation models. We also conducted 3D plant reconstruction with organ segmentation via the images and annotations. The database not only facilitates general-purpose image-based panicle identification and segmentation but also provides valuable resources for challenging tasks such as automatic rice cultivar identification, panicle and grain counting, and 3D plant reconstruction. The database and the model for image annotation are available at https://bic.njau.edu.cn/CVRP.html.

Citations: 0
A hybrid method for water stress evaluation of rice with the radiative transfer model and multidimensional imaging.
IF 6.4 | Tier 1, Agricultural and Forestry Sciences | Q1 AGRONOMY | Pub Date: 2025-02-28 | eCollection Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100016
Yufan Zhang, Xiuliang Jin, Liangsheng Shi, Yu Wang, Han Qiao, Yuanyuan Zha

Water stress is a crucial environmental factor that impacts the growth and yield of rice. Complex field microclimates and fluctuating water conditions pose a considerable challenge in accurately evaluating water stress, and measurement of a single crop trait is not sufficient for accurate evaluation of the effects of complex water stress. Four comprehensive indicators were introduced in this research, including canopy chlorophyll content (CCC) and canopy equivalent water (CEW). The response of these canopy-specific traits to different types of water stress was identified through individual plant experiments. A hybrid method integrating the PROSAIL radiative transfer model and multidimensional imaging data was developed to retrieve these traits. The synthetic dataset generated by PROSAIL was utilized as prior knowledge for developing a pretrained machine learning model. Subsequently, reflectance separated from hyperspectral images and phenotypic indicators extracted from front-view images were innovatively united to retrieve water stress-related traits. The results demonstrated that the hybrid method exhibited improved stability and accuracy for CCC (R = 0.7920, RMSE = 24.971 μg cm-2) and CEW (R = 0.8250, RMSE = 0.0075 cm) compared with both data-driven and physical inversion modeling methods. Overall, a robust and accurate method is proposed for assessing water stress in rice using a combination of radiative transfer modeling and multidimensional image-based data.
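A rough sketch of the "synthetic prior + machine learning" pattern the abstract describes, assuming a PROSAIL-style lookup table of reflectance-trait pairs: train a regressor on the synthetic data, apply it to measured reflectance, and report R and RMSE. The random forest, the toy reflectance table, and the trait relation are all stand-ins, not the paper's actual model.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Prior knowledge: PROSAIL-style synthetic reflectance (toy stand-in) -> trait (CEW, cm).
n_bands = 50
refl_sim = rng.uniform(0.0, 0.6, size=(2000, n_bands))
cew_sim = 0.05 * refl_sim[:, -1] - 0.02 * refl_sim[:, 10] + rng.normal(0, 0.002, 2000)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(refl_sim, cew_sim)

# Retrieval on reflectance separated from the hyperspectral images, then
# evaluation against ground-truth CEW with R and RMSE as in the abstract.
refl_obs = rng.uniform(0.0, 0.6, size=(30, n_bands))
cew_true = 0.05 * refl_obs[:, -1] - 0.02 * refl_obs[:, 10] + rng.normal(0, 0.002, 30)
cew_pred = model.predict(refl_obs)

r, _ = pearsonr(cew_true, cew_pred)
rmse = float(np.sqrt(np.mean((cew_true - cew_pred) ** 2)))
print(f"R={r:.3f}, RMSE={rmse:.4f} cm")
```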

Citations: 0
A deep learning-based micro-CT image analysis pipeline for nondestructive quantification of the maize kernel internal structure.
IF 6.4 | Tier 1, Agricultural and Forestry Sciences | Q1 AGRONOMY | Pub Date: 2025-02-28 | eCollection Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100022
Juan Wang, Si Yang, Chuanyu Wang, Weiliang Wen, Ying Zhang, Gui Liu, Jingyi Li, Xinyu Guo, Chunjiang Zhao

Identifying and segmenting the vitreous and starchy endosperm of maize kernels is essential for texture analysis. However, the complex internal structure of maize kernels presents several challenges. In CT (computed tomography) images, the pixel intensity differences between the vitreous and starchy endosperm regions are not distinct, potentially leading to low segmentation accuracy or oversegmentation. Moreover, the blurred edges between the vitreous and starchy endosperm make segmentation difficult, often resulting in jagged segmentation outcomes. We propose a deep learning-based CT image analysis pipeline to examine the internal structure of maize seeds. First, CT images are acquired using a multislice CT scanner; to improve the efficiency of maize kernel CT imaging, a batch scanning method is used, and individual kernels are accurately segmented from the batch-scanned CT images using the Canny algorithm. Second, we modify the conventional U-Net architecture for high-quality segmentation of the vitreous and starchy endosperm in maize kernels: the CBAM (convolutional block attention module) mechanism is integrated into the encoder and the SE (squeeze-and-excitation attention) mechanism into the decoder, the focal-Tversky loss function is used instead of the Dice loss, and a weighted boundary smoothing term is added as an additional loss term; the resulting network is named CSFTU-Net. The experimental results show that the CSFTU-Net model significantly improves the segmentation of vitreous and starchy endosperm. Finally, a segmented mask-based method is proposed to extract phenotypic parameters of maize kernel texture, including the kernel volume (V), the vitreous endosperm volume (VV), the starchy endosperm volume (SV), and their ratios to the total kernel volume (VV/V and SV/V). The proposed pipeline facilitates the nondestructive quantification of the internal structure of maize kernels, offering valuable insights for maize breeding and processing.
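The focal-Tversky loss mentioned above can be written compactly; the sketch below is a standard binary formulation in PyTorch, with alpha, beta, and gamma set to common literature defaults rather than the values used for CSFTU-Net.

```python
import torch

def focal_tversky_loss(probs, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-6):
    """Focal Tversky loss for a binary mask.

    probs:  predicted foreground probabilities, shape (N, H, W), values in [0, 1]
    target: ground-truth mask, same shape, values {0, 1}
    alpha/beta weight false negatives/false positives; gamma focuses training
    on hard examples (gamma = 1 recovers the plain Tversky loss).
    """
    probs = probs.reshape(probs.shape[0], -1)
    target = target.reshape(target.shape[0], -1).float()

    tp = (probs * target).sum(dim=1)
    fn = ((1.0 - probs) * target).sum(dim=1)
    fp = (probs * (1.0 - target)).sum(dim=1)

    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return ((1.0 - tversky) ** gamma).mean()

# Example with random predictions for a 2-image batch.
pred = torch.sigmoid(torch.randn(2, 64, 64))
mask = (torch.rand(2, 64, 64) > 0.5).float()
print(focal_tversky_loss(pred, mask))
```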

Citations: 0
PlantCaFo: An efficient few-shot plant disease recognition method based on foundation models.
IF 6.4 | Tier 1, Agricultural and Forestry Sciences | Q1 AGRONOMY | Pub Date: 2025-02-28 | eCollection Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100024
Xue Jiang, Jiashi Wang, Kai Xie, Chenxi Cui, Aobo Du, Xianglong Shi, Wanneng Yang, Ruifang Zhai

Although plant disease recognition is highly important in agricultural production, traditional methods face challenges due to the high costs associated with data collection and the scarcity of samples. Few-shot plant disease identification methods based on transfer learning can learn feature representations from a small amount of data; however, most of these methods require pretraining within the relevant domain. Recently, foundation models have demonstrated excellent performance in zero-shot and few-shot learning scenarios. In this study, we explore the potential of foundation models in plant disease recognition by proposing an efficient few-shot plant disease recognition model (PlantCaFo) built on foundation models. The model operates on an end-to-end network structure, integrating prior knowledge from multiple pretrained models. Specifically, we design a lightweight dilated contextual adapter (DCon-Adapter) to learn new knowledge from training data and use a weight decomposition matrix (WDM) to update the text weights. We test the proposed model on a public dataset, PlantVillage, and show that it achieves an accuracy of 93.53% in a "38-way 16-shot" setting. In addition, we conduct experiments on images collected from natural environments (the Cassava dataset), achieving an accuracy improvement of 6.80% over the baseline. To validate the model's generalization performance, we prepare an out-of-distribution dataset with 21 categories, and our model notably increases accuracy on this dataset. Extensive experiments demonstrate that our model outperforms other models in few-shot plant disease identification.
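The DCon-Adapter is not specified in detail in the abstract; as a generic illustration of the adapter pattern it refers to, the sketch below inserts a small trainable bottleneck with a dilated convolution over the token sequence into an otherwise frozen backbone. The module name, dimensions, and design are assumptions for illustration only, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DilatedAdapter(nn.Module):
    """Bottleneck adapter with a dilated 1D conv over the token dimension.

    Illustrative stand-in for an adapter inserted into a frozen foundation
    model; only these few parameters would be trained in a few-shot setting.
    """
    def __init__(self, dim, bottleneck=64, dilation=2):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.conv = nn.Conv1d(bottleneck, bottleneck, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, x):            # x: (batch, tokens, dim)
        h = self.act(self.down(x))   # (batch, tokens, bottleneck)
        h = self.conv(h.transpose(1, 2)).transpose(1, 2)  # dilated context over tokens
        return x + self.up(self.act(h))                   # residual update

tokens = torch.randn(4, 197, 768)    # e.g. ViT-style patch tokens
adapter = DilatedAdapter(dim=768)
print(adapter(tokens).shape)         # torch.Size([4, 197, 768])
```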

Citations: 0