IMP²RIS, an automated plant root PET radiotracer gas delivery system for in-soil visualization of symbiotic N₂ fixation in nodulated roots of soybean plants via PET imaging.
Pub Date: 2025-03-14 | eCollection Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100027
Alireza Nakhforoosh, Emil Hallin, Zongyu Wang, Micheal Hogue, Hillary H Mehlhorn, Grant Tingstad, Leon Kochian
The real-time, non-invasive visualization and quantification of symbiotic nitrogen fixation (SNF) in nodulated roots of soybean plants using Positron Emission Tomography (PET) imaging, coupled with the application of [¹³N]N₂ gas as a PET radiotracer, has been explored in only a few studies. In these studies, [¹³N]N₂ was delivered to nodulated soybean roots suspended in air within gas-tight acrylic boxes, followed by two-dimensional (2D) PET imaging to visualize the assimilated [¹³N]N₂ in the air-suspended root nodules. In this paper, we introduce the In-Media Plant PET Root Imaging System (IMP²RIS), a novel gas delivery system designed and constructed in-house. Unlike previous methods, IMP²RIS allows for non-intrusive delivery and exposure of [¹³N]N₂ gas to the nodulated roots of soybean plants grown in a clay-rich, soil-like, and visually opaque growth medium. This advancement enabled in-soil, three-dimensional (3D) visualization of SNF in soybean root nodules using Sofie, a preclinical PET scanner. Equipped with automated controls, IMP²RIS ensures ease of operation and operator safety during the [¹³N]N₂ delivery process. We describe the components and functionalities of IMP²RIS, supported by experimental results demonstrating efficient delivery and exposure of [¹³N]N₂ gas to the nodulated roots of three soybean cultivars that vary in N₂ fixation rates. In-soil quantitative PET imaging of SNF, aided by IMP²RIS, holds promise for integrating SNF as a functional phenotypic trait into breeding programs that aim to improve SNF efficiency by identifying breeding materials with high SNF capacities.
{"title":"IMP<sup>2</sup>RIS, an automated plant root PET radiotracer gas delivery system for in-soil visualization of symbiotic N<sub>2</sub> fixation in nodulated roots of soybean plants via PET imaging.","authors":"Alireza Nakhforoosh, Emil Hallin, Zongyu Wang, Micheal Hogue, Hillary H Mehlhorn, Grant Tingstad, Leon Kochian","doi":"10.1016/j.plaphe.2025.100027","DOIUrl":"10.1016/j.plaphe.2025.100027","url":null,"abstract":"<p><p>The real-time and non-invasive visualization and quantification of symbiotic nitrogen fixation (SNF) in nodulated roots of soybean plants using Positron Emission Tomography (PET) imaging, coupled with the application of [<sup>13</sup>N]N<sub>2</sub> gas as a PET radiotracer, has been explored in only a few studies. In these studies, [<sup>13</sup>N]N<sub>2</sub> was delivered to nodulated soybean roots suspended in air within gas-tight acrylic boxes, followed by two-dimensional (2D) PET imaging to visualize the assimilated [<sup>13</sup>N]N<sub>2</sub> in the air-suspended root nodules. In this paper, we introduce the In-Media Plant PET Root Imaging System (IMP<sup>2</sup>RIS), a novel gas delivery system designed and constructed in-house. Unlike the previous methods, IMP<sup>2</sup>RIS allows for non-intrusive delivery and exposure of [<sup>13</sup>N]N<sub>2</sub> gas to the nodulated roots of soybean plants grown in a clay-rich, soil-like and visually opaque growth medium. This advancement enabled in-soil, three-dimensional (3D) visualization of SNF in soybean root nodules using Sofie, a preclinical PET scanner. Equipped with automated controls, IMP<sup>2</sup>RIS ensures ease of operation and operator safety during the [<sup>13</sup>N]N<sub>2</sub> delivery process. We describe the components and functionalities of IMP<sup>2</sup>RIS, supported by experimental results showcasing its successful application in efficient delivery and exposure of [<sup>13</sup>N]N<sub>2</sub> gas to nodulated roots of three soybean plant cultivars that vary in rates of N<sub>2</sub> fixation. The in-soil quantitative PET imaging of SNF, aided by IMP<sup>2</sup>RIS, holds promise for enhancing the integration of SNF as a functional phenotypic trait into breeding programs, aiming to enhance SNF efficiency by identifying breeding materials with high SNF capacities.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 1","pages":"100027"},"PeriodicalIF":6.4,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12709944/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145782571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D reconstruction enables high-throughput phenotyping and quantitative genetic analysis of phyllotaxy.
Pub Date: 2025-03-08 | eCollection Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100023
Jensina M Davis, Mathieu Gaillard, Michael C Tross, Nikee Shrestha, Ian Ostermann, Ryleigh J Grove, Bosheng Li, Bedrich Benes, James C Schnable
Differences in canopy architecture play a role in determining both light and water use efficiency. Canopy architecture is determined by several component traits, including leaf length, width, number, angle, and phyllotaxy. Phyllotaxy may be among the most difficult of the leaf canopy traits to measure accurately across large numbers of individual plants. As a result, in simulations of the leaf canopies of grain crops such as maize and sorghum, this trait is frequently approximated as alternating 180° angles between sequential leaves. We explore the feasibility of extracting direct measurements of the phyllotaxy of sequential leaves from 3D reconstructions of individual sorghum plants generated from calibrated 2D images and test the assumption of consistently alternating phyllotaxy across a diverse set of sorghum genotypes. Using a voxel-carving-based approach, we generate 3D reconstructions from multiple calibrated 2D images of 366 sorghum plants representing 236 genotypes from the sorghum association panel. The correlation between automated and manual measurements of phyllotaxy is only modestly lower than the correlation between manual measurements generated by two different individuals. Automated phyllotaxy measurements exhibited a repeatability of R² = 0.41 across imaging timepoints separated by two days. A resampling-based genome-wide association study (GWAS) identified several putative genetic associations with lower-canopy phyllotaxy in sorghum. This study demonstrates the potential of 3D reconstruction to enable both quantitative genetic investigation and breeding for phyllotaxy in sorghum and other grain crops with similar plant architectures.
{"title":"3D reconstruction enables high-throughput phenotyping and quantitative genetic analysis of phyllotaxy.","authors":"Jensina M Davis, Mathieu Gaillard, Michael C Tross, Nikee Shrestha, Ian Ostermann, Ryleigh J Grove, Bosheng Li, Bedrich Benes, James C Schnable","doi":"10.1016/j.plaphe.2025.100023","DOIUrl":"10.1016/j.plaphe.2025.100023","url":null,"abstract":"<p><p>Differences in canopy architecture play a role in determining both the light and water use efficiency. Canopy architecture is determined by several component traits, including leaf length, width, number, angle, and phyllotaxy. Phyllotaxy may be among the most difficult of the leaf canopy traits to measure accurately across large numbers of individual plants. As a result, in simulations of the leaf canopies of grain crops such as maize and sorghum, this trait is frequently approximated as alternating 180° angles between sequential leaves. We explore the feasibility of extracting direct measurements of the phyllotaxy of sequential leaves from 3D reconstructions of individual sorghum plants generated from 2D calibrated images and test the assumption of consistently alternating phyllotaxy across a diverse set of sorghum genotypes. Using a voxel-carving-based approach, we generate 3D reconstructions from multiple calibrated 2D images of 366 sorghum plants representing 236 sorghum genotypes from the sorghum association panel. The correlation between automated and manual measurements of phyllotaxy is only modestly lower than the correlation between manual measurements of phyllotaxy generated by two different individuals. Automated phyllotaxy measurements exhibited a repeatability of <i>R</i> <sup>2</sup> = 0.41 across imaging timepoints separated by a period of two days. A resampling based genome wide association study (GWAS) identified several putative genetic associations with lower-canopy phyllotaxy in sorghum. This study demonstrates the potential of 3D reconstruction to enable both quantitative genetic investigation and breeding for phyllotaxy in sorghum and other grain crops with similar plant architectures.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 1","pages":"100023"},"PeriodicalIF":6.4,"publicationDate":"2025-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12710043/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145782271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Retrieving the chlorophyll content of individual apple trees by reducing canopy shadow impact via a 3D radiative transfer model and UAV multispectral imagery.
Pub Date: 2025-03-06 | eCollection Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100015
Chengjian Zhang, Zhibo Chen, Riqiang Chen, Wenjie Zhang, Dan Zhao, Guijun Yang, Bo Xu, Haikuan Feng, Hao Yang
Accurate monitoring of the leaf chlorophyll content (LCC) and canopy chlorophyll content (CCC) of individual apple trees, and of their spatial distribution, is highly important for the effective management of individual plants and for the construction of modern smart orchards. However, the estimation of LCC and CCC is affected by shadows caused by canopy structure and observation geometry. In this study, we resolved the response relationship between individual apple tree crown spectra and shadows through a three-dimensional radiative transfer model (3D RTM) and unmanned aerial vehicle (UAV) multispectral images, assessed the resistance of a series of vegetation indices (VIs) to shadows, and developed a hybrid inversion model that is resistant to shadow interference. The results revealed that (1) the proportion of individual tree canopy shadow followed a parabolic trend over time, with a minimum at noon. Correspondingly, reflectance in the visible bands decreased with increasing canopy shadow ratio and thus peaked at noon, whereas reflectance in the near-infrared band showed the opposite pattern. (2) The accuracy of chlorophyll content estimation varies among VIs at different canopy shadow ratios; the five VIs most resistant to changes in canopy shadow ratio are NDVI-RE, Cire, Cigreen, TVI, and GNDVI. (3) The constructed 3D RTM + GPR hybrid inversion model requires only four input VIs, namely, NDVI-RE, Cire, Cigreen, and TVI, to achieve the best inversion accuracy. (4) Both the LCC and the CCC of individual trees had good validation accuracy (LCC: R² = 0.775, RMSE = 6.86 μg/cm², nRMSE = 12.24 %; CCC: R² = 0.784, RMSE = 32.33 μg/cm², nRMSE = 14.49 %), and their distributions at the orchard scale were characterized by considerable spatial heterogeneity. This study provides ideas for investigating the response between individual tree canopy shadows and spectra and offers a new strategy for minimizing the influence of shadow effects on the accurate estimation of chlorophyll content in individual apple trees.
{"title":"Retrieving the chlorophyll content of individual apple trees by reducing canopy shadow impact via a 3D radiative transfer model and UAV multispectral imagery.","authors":"Chengjian Zhang, Zhibo Chen, Riqiang Chen, Wenjie Zhang, Dan Zhao, Guijun Yang, Bo Xu, Haikuan Feng, Hao Yang","doi":"10.1016/j.plaphe.2025.100015","DOIUrl":"10.1016/j.plaphe.2025.100015","url":null,"abstract":"<p><p>Accurate monitoring and spatial distribution of the leaf chlorophyll content (LCC) and canopy chlorophyll content (CCC) of individual apple trees are highly important for the effective management of individual plants and the promotion of the construction of modern smart orchards. However, the estimation of LCC and CCC is affected by shadows caused by canopy structure and observation geometry. In this study, we resolved the response relationship between individual apple tree crown spectra and shadows through a three-dimensional radiative transfer model (3D RTM) and unmanned aerial vehicle (UAV) multispectral images, assessed the resistance of a series of vegetation indices (VIs) to shadows and developed a hybrid inversion model that is resistant to shadow interference. The results revealed that (1) the proportion of individual tree canopy shadows exhibited a parabolic trend with time, with a minimum occurring at noon. Correspondingly, the reflectance in the visible band decreased with increasing canopy shadow ratio and reached a maximum value at noon, whereas the pattern of change in the reflectance in the near-infrared band was opposite that in the visible band. (2) The accuracy of chlorophyll content estimation varies among different VIs at different canopy shadow ratios. The top five VIs that are most resistant to changes in canopy shadow ratios are the NDVI-RE, Cire, Cigreen, TVI, and GNDVI. (3) For the constructed 3D RTM + GPR hybrid inversion model, only four VIs, namely, NDVI-RE, Cire, Cigreen, and TVI, need to be input to achieve the best inversion accuracy. (4) Both the LCC and the CCC of individual trees had good validation accuracy (LCC: R<sup>2</sup> = 0.775, RMSE = 6.86 μg/cm<sup>2</sup>, nRMSE = 12.24 %; CCC: R<sup>2</sup> = 0.784, RMSE = 32.33 μg/cm<sup>2</sup>, and nRMSE = 14.49 %), and their distributions at orchard scales were characterized by considerable spatial heterogeneity. This study provides ideas for investigating the response between individual tree canopy shadows and spectra and offers a new strategy for minimizing the influence of shadow effects on the accurate estimation of chlorophyll content in individual apple trees.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 1","pages":"100015"},"PeriodicalIF":6.4,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12710017/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145781988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RGB imaging-based evaluation of waterlogging tolerance in cultivated and wild chrysanthemums.
Pub Date: 2025-03-06 | eCollection Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100019
Siyue Wang, Yang Yang, Junwei Zeng, Limin Zhao, Haibin Wang, Sumei Chen, Weimin Fang, Fei Zhang, Jiangshuo Su, Fadi Chen
Waterlogging is a major stress that impacts the chrysanthemum industry. Rapid, accurate, large-scale germplasm screening for waterlogging-tolerant resources is essential for developing new cultivars with improved waterlogging tolerance. To overcome this phenotyping bottleneck, consumer-grade digital cameras were used to acquire red-green-blue (RGB) images of 180 chrysanthemum cultivars and their wild relatives under waterlogging stress and well-watered conditions. A total of 103 image-based digital traits (i-traits), including 10 morphological i-traits and 93 texture i-traits, were extracted and systematically analyzed. Most of these i-traits presented high coefficients of variation (CVs) and broad-sense heritability (H²), with an average CV of 34.04 % and an average H² of 0.93. We identified several novel texture i-traits associated with the hue (H) component that strongly correlated with the traditional waterlogging tolerance index, the membership function value of waterlogging (MFVW) (R = 0.63-0.77). We further employed the random forest (RF) and gradient boosting tree (GBT) machine learning algorithms to predict aboveground biomass and MFVW on the basis of different i-trait datasets. The RF model achieved superior predictive performance, with a coefficient of determination (R²) of up to 0.88 for shoot weight and 0.86 for MFVW. Moreover, a subset of the top 13 most important i-traits could accurately predict MFVW (R² > 0.80) under cross-validation. A total of 10 highly tolerant resources were selected by traditional and RGB-based evaluation, 50 % of which belonged to Artemisia. Our findings confirm that RGB-based technology provides a promising approach for quantifying waterlogging responses, contributing to future breeding programs and the genetic dissection of waterlogging tolerance.
{"title":"RGB imaging-based evaluation of waterlogging tolerance in cultivated and wild chrysanthemums.","authors":"Siyue Wang, Yang Yang, Junwei Zeng, Limin Zhao, Haibin Wang, Sumei Chen, Weimin Fang, Fei Zhang, Jiangshuo Su, Fadi Chen","doi":"10.1016/j.plaphe.2025.100019","DOIUrl":"10.1016/j.plaphe.2025.100019","url":null,"abstract":"<p><p>Waterlogging is a major stress that impacts the chrysanthemum industry. Large-scale germplasm screening for identifying waterlogging-tolerant resources in a quick and accurate manner is essential for developing new cultivars with improved waterlogging tolerance. To overcome this phenotyping bottleneck, consumer-grade digital cameras have been used to acquire the red-green-blue (RGB) images of 180 chrysanthemum cultivars and their wild relatives under waterlogging stress and well-watered conditions. A total of 103 image-based digital traits (i-traits), including 10 morphological i-traits and 93 texture i-traits, were extracted and systematically analyzed. Most of these i-traits presented high coefficients of variation (<i>CVs</i>) and broad-sense heritability (<i>H</i> <sup><i>2</i></sup> ), with an average <i>CV</i> of 34.04 % and an average <i>H</i> <sup><i>2</i></sup> of 0.93. We identified several novel texture i-traits associated with the hue (H) component, which strongly correlated with the traditional waterlogging tolerance index, the membership function value of waterlogging (MFVW) (<i>R</i> = 0.63-0.77). We further employed the random forest (RF) and gradient boosting tree (GBT) machine learning algorithms to predict aboveground biomass and MFVW on the basis of different i-trait datasets. The RF model achieved superior predictive performance, with a coefficient of determination (<i>R</i> <sup><i>2</i></sup> ) of up to 0.88 for shoot weight and 0.86 for MFVW. Moreover, a subset of the top 13 most important i-traits could accurately predict MFVW (<i>R</i> <sup><i>2</i></sup> > 0.80) via the cross-validation method. A total of 10 highly tolerant resources were selected by traditional and RGB-based evaluation, and 50 % belonged to <i>Artemisia</i>. Our findings confirmed that RGB-based technology provides a promising novel approach for quantifying waterlogging response that contributes to future breeding programs and genetic dissection for waterlogging tolerance.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 1","pages":"100019"},"PeriodicalIF":6.4,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12709946/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145781975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Combining UAV multisensor field phenotyping and genome-wide association studies to reveal the genetic basis of plant height in cotton (Gossypium hirsutum).
Pub Date: 2025-03-05 | eCollection Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100026
Liqiang Fan, Jiajie Yang, Xuwen Wang, Zhao Liu, Bowei Xu, Li Liu, Chenxu Gao, Xiantao Ai, Fuguang Li, Lei Gao, Yu Yu, Zuoren Yang
Plant height (PH) is a key agronomic trait influencing plant architecture. Suitable PH values for cotton are important for lodging resistance, high planting density, and mechanized harvesting, making it crucial to elucidate the mechanisms of the genetic regulation of PH. However, traditional field PH phenotyping relies largely on manual measurements, limiting its large-scale application. In this study, a high-throughput phenotyping platform based on UAV-mounted RGB and light detection and ranging (LiDAR) sensors was developed to efficiently and accurately obtain time-series PHs of 419 cotton accessions in the field. Different strategies were used to extract PH values from the two sets of sensor data, and the extracted values were used to train linear regression and machine learning models that produce PH predictions. These predictions were consistent with manual PH measurements for both the LiDAR (R² = 0.934) and RGB (R² = 0.914) data. The predicted PH values were used for GWAS analysis, which identified 34 PH-related genes, two of which, GhPH1 and GhUBP15, have previously been demonstrated to regulate PH in cotton. We further identified significant differences in the expression of a new gene, named GhPH_UAV1, in stems of the G. hirsutum cultivar ZM24 harvested on the 15th, 35th, and 70th days after sowing compared with those of a dwarf mutant (pag1) that presented shortened stem and internode phenotypes. Overexpression of GhPH_UAV1 significantly promoted cotton stem development, whereas its knockout by CRISPR-Cas9 dramatically inhibited stem growth, suggesting that GhPH_UAV1 plays a positive regulatory role in cotton PH. This field-scale high-throughput phenotype monitoring platform significantly improves the ability to obtain high-quality phenotypic data from large populations, which helps overcome the imbalance between massive genotypic data and scarce field phenotypic data and facilitates the integration of genotype and phenotype research for crop improvement.
{"title":"Combining UAV multisensor field phenotyping and genome-wide association studies to reveal the genetic basis of plant height in cotton (<i>Gossypium hirsutum</i>).","authors":"Liqiang Fan, Jiajie Yang, Xuwen Wang, Zhao Liu, Bowei Xu, Li Liu, Chenxu Gao, Xiantao Ai, Fuguang Li, Lei Gao, Yu Yu, Zuoren Yang","doi":"10.1016/j.plaphe.2025.100026","DOIUrl":"10.1016/j.plaphe.2025.100026","url":null,"abstract":"<p><p>Plant height (PH) is a key agronomic trait influencing plant architecture. Suitable PH values for cotton are important for lodging resistance, high planting density, and mechanized harvesting, making it crucial to elucidate the mechanisms of the genetic regulation of PH. However, traditional field PH phenotyping largely relies on manual measurements, limiting its large-scale application. In this study, a high-throughput phenotyping platform based on UAV-mounted RGB and light detection and ranging (LiDAR) was developed to efficiently and accurately obtain time series PHs of 419 cotton accessions in the field. Different strategies were used to extract PH values from two sets of sensor data, and the extracted values were used to train using linear regression and machine learning methods to obtain PH predictions. These predictions were consistent with manual measurements of the PH for the LiDAR (R<sup>2</sup> = 0.934) and RGB (R<sup>2</sup> = 0.914) data. The predicted PH values were used for GWAS analysis, and 34 PH-related genes, two of which have been demonstrated to regulate PH in cotton, namely, <i>GhPH1</i> and <i>GhUBP15</i>, were identified. We further identified significant differences in the expression of a new gene named <i>GhPH_UAV1</i> in the stems of the <i>G. hirsutum</i> cultivar ZM24 harvested on the 15th, 35th, and 70th days after sowing compared with those from a dwarf mutant (<i>pag1</i>), which presented shortened stem and internode phenotypes. The overexpression of <i>GhPH_UAV1</i> significantly promoted cotton stem development, whereas its knockout by CRISPR-Cas9 dramatically inhibited stem growth, suggesting that <i>GhPH_UAV1</i> plays a positive regulatory role in cotton PH. This field-scale high-throughput phenotype monitoring platform significantly improves the ability to obtain high-quality phenotypic data from large populations, which helps overcome the imbalance between massive genotypic data and the shortage of field phenotypic data and facilitates the integration of genotype and phenotype research for crop improvement.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 1","pages":"100026"},"PeriodicalIF":6.4,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12710045/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145782375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Segmenting vegetation from UAV images via spectral reconstruction in complex field environments.
Pub Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100021
Zhixun Pei, Xingcai Wu, Xue Wu, Yuanyuan Xiao, Peijia Yu, Zhenran Gao, Qi Wang, Wei Guo
Segmenting vegetation in remote sensing images minimizes background interference, enabling efficient monitoring and analysis of vegetation information. Vegetation segmentation remains a significant challenge, however, because of inherently complex environmental conditions. There is a growing trend of combining spectral sensing with deep learning for field vegetation segmentation to cope with such environments, but two major constraints remain: the high cost of the equipment required for field spectral data collection, and the limited availability of field datasets, whose annotation is time-consuming and labor-intensive. To address these challenges, we propose a weakly supervised approach to field vegetation segmentation that builds on spectral reconstruction (SR) techniques and draws on vegetation index (VI) theory. Specifically, to reduce the cost of data acquisition, we propose SRCNet and SRANet, built on convolutional and attention structures, respectively, to reconstruct multispectral images of fields. Then, borrowing from the VI principle, we aggregate the reconstructed data to establish connections among spectral bands, obtaining more salient vegetation information. Finally, we employ an adaptation strategy to segment the fused feature map with a weakly supervised method, which requires no manual labeling to obtain field vegetation segmentation results. Our segmentation method achieves a Mean Intersection over Union (MIoU) of 0.853 on real field datasets, outperforming existing methods. In addition, we have open-sourced a dataset of unmanned aerial vehicle (UAV) RGB-multispectral images comprising 2358 pairs of samples to improve the richness of remote sensing agricultural data. The code and data are available at https://github.com/GZU-SAMLab/VegSegment_SR and http://sr-seg.samlab.cn/.
{"title":"Segmenting vegetation from UAV images via spectral reconstruction in complex field environments.","authors":"Zhixun Pei, Xingcai Wu, Xue Wu, Yuanyuan Xiao, Peijia Yu, Zhenran Gao, Qi Wang, Wei Guo","doi":"10.1016/j.plaphe.2025.100021","DOIUrl":"10.1016/j.plaphe.2025.100021","url":null,"abstract":"<p><p>Segmentation of vegetation remote sensing images can minimize the interference of background, thus achieving efficient monitoring and analysis for vegetation information. The segmentation of vegetation poses a significant challenge due to the inherently complex environmental conditions. Currently, there is a growing trend of using spectral sensing combined with deep learning for field vegetation segmentation to cope with complex environments. However, two major constraints remain: the high cost of equipment required for field spectral data collection; the availability of field datasets is limited and data annotation is time-consuming and labor-intensive. To address these challenges, we propose a weakly supervised approach for field vegetation segmentation by using spectral reconstruction (SR) techniques as the foundation and drawing on the theory of vegetation index (VI). Specifically, to reduce the cost of data acquisition, we propose SRCNet and SRANet based on convolution and attention structure to reconstruct multispectral images of fields, respectively. Then, borrowing from the VI principle, we aggregate the reconstructed data to establish the connection of spectral bands, obtaining more salient vegetation information. Finally, we employ the adaptation strategy to segment the fused feature map using a weakly supervised method, which does not require manual labeling to obtain a field vegetation segmentation result. Our segmentation method can achieve a Mean Intersection over Union (MIoU) of 0.853 on real field datasets, which outperforms the existing methods. In addition, we have open-sourced a dataset of unmanned aerial vehicle (UAV) RGB-multispectral images, comprising 2358 pairs of samples, to improve the richness of remote sensing agricultural data. The code and data are available at https://github.com/GZU-SAMLab/VegSegment_SR, and http://sr-seg.samlab.cn/.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 1","pages":"100021"},"PeriodicalIF":6.4,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12709948/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145781947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CVRP: A rice image dataset with high-quality annotations for image segmentation and plant phenomics research.
Pub Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100025
Zhiyan Tang, Jiandong Sun, Yunlu Tian, Jiexiong Xu, Weikun Zhao, Gang Jiang, Jiaqi Deng, Xiangchao Gan
Machine learning models for crop image analysis and phenomics are highly important for precision agriculture and breeding and have been the subject of intensive research. However, the lack of publicly available high-quality image datasets with detailed annotations has severely hindered the development of these models. In this work, we present a comprehensive multicultivar, multiview rice plant image dataset (CVRP) created from 231 landraces and 50 modern cultivars grown under dense planting in paddy fields. The dataset includes images capturing rice plants in their natural environment, as well as indoor images focusing specifically on panicles, allowing for a detailed investigation of cultivar-specific differences. A semiautomatic annotation process based on deep learning models, followed by rigorous manual curation, was used to generate the annotations. We demonstrated the utility of CVRP by evaluating the performance of four state-of-the-art (SOTA) semantic segmentation models. We also conducted 3D plant reconstruction with organ segmentation via the images and annotations. The database not only facilitates general-purpose image-based panicle identification and segmentation but also provides valuable resources for challenging tasks such as automatic rice cultivar identification, panicle and grain counting, and 3D plant reconstruction. The database and the annotation model are available at https://bic.njau.edu.cn/CVRP.html.
{"title":"CVRP: A rice image dataset with high-quality annotations for image segmentation and plant phenomics research.","authors":"Zhiyan Tang, Jiandong Sun, Yunlu Tian, Jiexiong Xu, Weikun Zhao, Gang Jiang, Jiaqi Deng, Xiangchao Gan","doi":"10.1016/j.plaphe.2025.100025","DOIUrl":"10.1016/j.plaphe.2025.100025","url":null,"abstract":"<p><p>Machine learning models for crop image analysis and phenomics are highly important for precision agriculture and breeding and have been the subject of intensive research. However, the lack of publicly available high-quality image datasets with detailed annotations has severely hindered the development of these models. In this work, we present a comprehensive multicultivar and multiview rice plant image dataset (CVRP) created from 231 landraces and 50 modern cultivars grown under dense planting in paddy fields. The dataset includes images capturing rice plants in their natural environment, as well as indoor images focusing specifically on panicles, allowing for a detailed investigation of cultivar-specific differences. A semiautomatic annotation process using deep learning models was designed for annotations, followed by rigorous manual curation. We demonstrated the utility of the CVRP by evaluating the performance of four state-of-the-art (SOTA) semantic segmentation models. We also conducted 3D plant reconstruction with organ segmentation via images and annotations. The database not only facilitates general-purpose image-based panicle identification and segmentation but also provides valuable resources for challenging tasks such as automatic rice cultivar identification, panicle and grain counting, and 3D plant reconstruction. The database and the model for image annotation are available at https://bic.njau.edu.cn/CVRP.html.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 1","pages":"100025"},"PeriodicalIF":6.4,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12709888/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145782427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A hybrid method for water stress evaluation of rice with the radiative transfer model and multidimensional imaging.
Pub Date: 2025-02-28 | DOI: 10.1016/j.plaphe.2025.100016
Yufan Zhang, Xiuliang Jin, Liangsheng Shi, Yu Wang, Han Qiao, Yuanyuan Zha
Water stress is a crucial environmental factor that impacts the growth and yield of rice. Complex field microclimates and fluctuating water conditions make accurate evaluation of water stress considerably challenging, and measuring any single crop trait is not sufficient to evaluate the effects of complex water stress accurately. Four comprehensive indicators were therefore introduced in this research, including canopy chlorophyll content (CCC) and canopy equivalent water (CEW). The responses of these canopy-specific traits to different types of water stress were identified through individual plant experiments. A hybrid method integrating the PROSAIL radiative transfer model and multidimensional imaging data was developed to retrieve these traits. The synthetic dataset generated by PROSAIL was utilized as prior knowledge for developing a pre-trained machine learning model. Subsequently, reflectance separated from hyperspectral images and phenotypic indicators extracted from front-view images were combined to retrieve water stress-related traits. The results demonstrated that the hybrid method exhibited improved stability and accuracy for CCC (R = 0.7920, RMSE = 24.971 μg cm⁻²) and CEW (R = 0.8250, RMSE = 0.0075 cm) compared with both data-driven and physical inversion modeling methods. Overall, a robust and accurate method is proposed for assessing water stress in rice using a combination of radiative transfer modeling and multidimensional image-based data.
{"title":"A hybrid method for water stress evaluation of rice with the radiative transfer model and multidimensional imaging.","authors":"Yufan Zhang, Xiuliang Jin, Liangsheng Shi, Yu Wang, Han Qiao, Yuanyuan Zha","doi":"10.1016/j.plaphe.2025.100016","DOIUrl":"10.1016/j.plaphe.2025.100016","url":null,"abstract":"<p><p>Water stress is a crucial environmental factor that impacts the growth and yield of rice. Complex field microclimates and fluctuating water conditions pose a considerable challenge in accurately evaluating water stress. Measurement of a particular crop trait is not sufficient for accurate evaluation of the effects of complex water stress. Four comprehensive indicators were introduced in this research, including canopy chlorophyll content (CCC) and canopy equivalent water (CEW). The response of the canopy-specific traits to different types of water stress was identified through individual plant experiments. A hybrid method integrating the PROSAIL radiative transfer model and multidimensional imaging data to retrieve these traits. The synthetic dataset generated by PROSAIL was utilized as prior knowledge for developing a pre-trained machine learning model. Subsequently, reflectance separated from hyperspectral images and phenotypic indicators extracted from front-view images were innovatively united to retrieve water stress-related traits. The results demonstrated that the hybrid method exhibited improved stability and accuracy of CCC (R = 0.7920, RMSE = 24.971 μg cm<sup>-2</sup>) and CEW (R = 0.8250, RMSE = 0.0075 cm) compared to both data-driven and physical inversion modeling methods. Overall, a robust and accurate method is proposed for assessing water stress in rice using a combination of radiative transfer modeling and multidimensional image-based data.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 1","pages":"100016"},"PeriodicalIF":6.4,"publicationDate":"2025-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12709993/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145782363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A deep learning-based micro-CT image analysis pipeline for nondestructive quantification of the maize kernel internal structure.
Pub Date: 2025-02-28 | eCollection Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100022
Juan Wang, Si Yang, Chuanyu Wang, Weiliang Wen, Ying Zhang, Gui Liu, Jingyi Li, Xinyu Guo, Chunjiang Zhao
Identifying and segmenting the vitreous and starchy endosperm of maize kernels is essential for texture analysis. However, the complex internal structure of maize kernels presents several challenges. In computed tomography (CT) images of maize kernels, the pixel intensity differences between the vitreous and starchy endosperm regions are not distinct, potentially leading to low segmentation accuracy or oversegmentation. Moreover, the blurred edges between the vitreous and starchy endosperm make segmentation difficult, often resulting in jagged segmentation outcomes. We propose a deep learning-based CT image analysis pipeline to examine the internal structure of maize seeds. First, CT images are acquired using a multislice CT scanner; a batch scanning method improves the efficiency of maize kernel CT imaging, and individual kernels are accurately segmented from the batch-scanned images using the Canny algorithm. Second, we modify the conventional U-Net architecture for high-quality segmentation of the vitreous and starchy endosperm: a CBAM (convolutional block attention module) is integrated into the encoder and an SE (squeeze-and-excitation) attention mechanism into the decoder, the focal-Tversky loss replaces the Dice loss, and a weighted boundary-smoothing term is added as an additional loss term; the resulting network is named CSFTU-Net. The experimental results show that the CSFTU-Net model significantly improves the segmentation of vitreous and starchy endosperm. Finally, a segmented mask-based method is proposed to extract phenotypic parameters of maize kernel texture, including the volume of the kernel (V), the volume of the vitreous endosperm (VV), the volume of the starchy endosperm (SV), and their ratios over the total kernel volume (VV/V and SV/V). The proposed pipeline facilitates the nondestructive quantification of the internal structure of maize kernels, offering valuable insights for maize breeding and processing.
{"title":"A deep learning-based micro-CT image analysis pipeline for nondestructive quantification of the maize kernel internal structure.","authors":"Juan Wang, Si Yang, Chuanyu Wang, Weiliang Wen, Ying Zhang, Gui Liu, Jingyi Li, Xinyu Guo, Chunjiang Zhao","doi":"10.1016/j.plaphe.2025.100022","DOIUrl":"10.1016/j.plaphe.2025.100022","url":null,"abstract":"<p><p>Identifying and segmenting the vitreous and starchy endosperm of maize kernels is essential for texture analysis. However, the complex internal structure of maize kernels presents several challenges. In CT (computed tomography) images, the pixel intensity differences between the vitreous and starchy endosperm regions in maize kernel CT images are not distinct, potentially leading to low segmentation accuracy or oversegmentation. Moreover, the blurred edges between the vitreous and starchy endosperm make segmentation difficult, often resulting in jagged segmentation outcomes. We propose a deep learning-based CT image analysis pipeline to examine the internal structure of maize seeds. First, CT images are acquired using a multislice CT scanner. To improve the efficiency of maize kernel CT imaging, a batch scanning method is used. Individual kernels are accurately segmented from batch-scanned CT images using the Canny algorithm. Second, we modify the conventional architecture for high-quality segmentation of the vitreous and starchy endosperm in maize kernels. The conventional U-Net is modified by integrating the CBAM (convolutional block attention module) mechanism in the encoder and the SE (squeeze-and-excitation attention) mechanism in the decoder, as well as by using the focal-Tversky loss function instead of the Dice loss, and the boundary smoothing term is weighted as an additional loss term, named CSFTU-Net. The experimental results show that the CSFTU-Net model significantly improves the ability of segmenting vitreous and starchy endosperm. Finally, a segmented mask-based method is proposed to extract phenotype parameters of maize kernel texture, including the volume of the kernel (V), volume of the vitreous endosperm (VV), volume of starchy endosperm (SV), and ratios over their respective total kernel volumes (VV/V and SV/V). The proposed pipeline facilitates the nondestructive quantification of the internal structure of maize kernels, offering valuable insights for maize breeding and processing.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 1","pages":"100022"},"PeriodicalIF":6.4,"publicationDate":"2025-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12709881/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145782384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PlantCaFo: An efficient few-shot plant disease recognition method based on foundation models.
Pub Date: 2025-02-28 | eCollection Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100024
Xue Jiang, Jiashi Wang, Kai Xie, Chenxi Cui, Aobo Du, Xianglong Shi, Wanneng Yang, Ruifang Zhai
Although plant disease recognition is highly important in agricultural production, traditional methods face challenges due to the high costs of data collection and the scarcity of samples. Few-shot plant disease identification based on transfer learning can learn feature representations from small amounts of data; however, most such methods require pretraining within the relevant domain. Recently, foundation models have demonstrated excellent performance in zero-shot and few-shot learning scenarios. In this study, we explore the potential of foundation models for plant disease recognition by proposing PlantCaFo, an efficient few-shot plant disease recognition model built on them. The model uses an end-to-end network structure that integrates prior knowledge from multiple pretrained models. Specifically, we design a lightweight dilated contextual adapter (DCon-Adapter) to learn new knowledge from training data and use a weight decomposition matrix (WDM) to update the text weights. On the public PlantVillage dataset, the model achieves an accuracy of 93.53 % in a "38-way 16-shot" setting. In addition, experiments on images collected from natural environments (the Cassava dataset) show an accuracy improvement of 6.80 % over the baseline. To validate the model's generalization performance, we prepared an out-of-distribution dataset with 21 categories, on which our model notably increases accuracy. Extensive experiments demonstrate that our model outperforms other models in few-shot plant disease identification.
{"title":"PlantCaFo: An efficient few-shot plant disease recognition method based on foundation models.","authors":"Xue Jiang, Jiashi Wang, Kai Xie, Chenxi Cui, Aobo Du, Xianglong Shi, Wanneng Yang, Ruifang Zhai","doi":"10.1016/j.plaphe.2025.100024","DOIUrl":"10.1016/j.plaphe.2025.100024","url":null,"abstract":"<p><p>Although plant disease recognition is highly important in agricultural production, traditional methods face challenges due to the high costs associated with data collection and the scarcity of samples. Few-shot plant disease identification tasks, which are based on transfer learning, can learn feature representations from a small amount of data; however, most of these methods require pretraining within the relevant domain. Recently, foundation models have demonstrated excellent performance in zero-shot and few-shot learning scenarios. In this study, we explore the potential of foundation models in plant disease recognition by proposing an efficient few-shot plant disease recognition model (PlantCaFo) based on foundation models. This model operates on an end-to-end network structure, integrating prior knowledge from multiple pretraining models. Specifically, we design a lightweight dilated contextual adapter (DCon-Adapter) to learn new knowledge from training data and use a weight decomposition matrix (WDM) to update the text weights. We test the proposed model on a public dataset, PlantVillage, and show that the model achieves an accuracy of 93.53 % in a \"38-way 16-shot\" setting. In addition, we conduct experiments on images collected from natural environments (Cassava dataset), achieving an accuracy improvement of 6.80 % over the baseline. To validate the model's generalization performance, we prepare an out-of-distribution dataset with 21 categories, and our model notably increases the accuracy of this dataset. Extensive experiments demonstrate that our model exhibits superior performance over other models in few-shot plant disease identification.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 1","pages":"100024"},"PeriodicalIF":6.4,"publicationDate":"2025-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12709961/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145781940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}