Pub Date: 2024-12-19 | eCollection Date: 2024-01-01 | DOI: 10.34133/plantphenomics.0270
Jiawei Chen, Qing Li, Dong Jiang
The selection and promotion of high-yielding, nitrogen-efficient wheat varieties can reduce nitrogen fertilizer application while safeguarding wheat yield and quality, contributing to the sustainable development of agriculture. The mining and localization of nitrogen use efficiency (NUE) genes is therefore particularly important, but localizing NUE genes requires extensive phenotypic data. In view of this, we propose using low-altitude aerial photography to acquire field images at a large scale, generate 3-dimensional (3D) point clouds and multispectral images of wheat plots, introduce a wheat 3D plot segmentation dataset, quantify plot canopy height in combination with PointNet++, and compute 4 nitrogen-utilization-related vegetation indices. Six height-related and 24 vegetation-index-related dynamic digital phenotypes were extracted from the data collected at different time points and fitted to generate dynamic curves. We applied the height-derived dynamic phenotypes to genome-wide association studies of 160 wheat cultivars (660,000 single-nucleotide polymorphisms) and located reliable loci associated with height and NUE, some of which were consistent with published studies. The dynamic phenotypes derived from vegetation indices could likewise be applied to genome-wide association studies, ultimately locating NUE- and growth-related loci. In conclusion, our work demonstrates valuable advances in 3D digital dynamic phenotyping for locating NUE genes in wheat and provides breeders with accurate phenotypic data for selecting and breeding nitrogen-efficient wheat varieties.
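The abstract does not name the 4 nitrogen-related vegetation indices, but indices of this family are simple normalized band ratios computed from multispectral reflectance. A minimal sketch, assuming NDRE-style indices (NDRE is a common nitrogen-sensitive choice; it is not confirmed to be one of the paper's 4, and the function names are illustrative):

```python
def normalized_difference(b1: float, b2: float) -> float:
    """Generic normalized-difference index: (b1 - b2) / (b1 + b2)."""
    denom = b1 + b2
    if denom == 0:
        raise ValueError("band reflectances sum to zero")
    return (b1 - b2) / denom

def ndre(nir: float, red_edge: float) -> float:
    """Normalized difference red edge index, widely used as a
    nitrogen-sensitive vegetation index (assumed example, not
    necessarily one of the paper's 4 indices)."""
    return normalized_difference(nir, red_edge)
```

For example, a plot-mean NIR reflectance of 0.45 and red-edge reflectance of 0.27 gives NDRE = 0.25.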
Title: From Images to Loci: Applying 3D Deep Learning to Enable Multivariate and Multitemporal Digital Phenotyping and Mapping the Genetics Underlying Nitrogen Use Efficiency in Wheat
Plant Phenomics, vol. 6, p. 0270. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11658601/pdf/
In contemporary agriculture, experts develop preventative and remedial strategies for various disease stages in diverse crops. Deciding which stage a disease has reached exceeds the capabilities of single-image tasks such as image classification and object detection. Consequently, research now focuses on training visual question answering (VQA) models. However, existing studies concentrate on identifying disease species rather than formulating questions that cover crucial multiple attributes. Additionally, model performance is susceptible to model structure and dataset biases. To address these challenges, we construct the informed-learning-guided VQA model of crop disease (ILCD). ILCD improves model performance by integrating coattention, a multimodal fusion model (MUTAN), and a bias-balancing (BiBa) strategy. To facilitate the investigation of various visual attributes of crop diseases and the determination of disease occurrence stages, we construct a new VQA dataset, Crop Disease Multi-attribute VQA with Prior Knowledge (CDwPK-VQA), containing comprehensive information on visual attributes such as shape, size, status, and color. We expand the dataset by integrating prior knowledge into CDwPK-VQA to address performance challenges. Comparative experiments with ILCD on the VQA-v2, VQA-CP v2, and CDwPK-VQA datasets achieve accuracies of 68.90%, 49.75%, and 86.06%, respectively. Ablation experiments on CDwPK-VQA evaluate the effectiveness of the coattention, MUTAN, and BiBa modules. These experiments demonstrate that ILCD achieves state-of-the-art accuracy and practical value for agricultural applications. The source code is available at https://github.com/SdustZYP/ILCD-master/tree/main.
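Accuracies on VQA-v2 are conventionally computed with the VQA soft-accuracy metric, which credits an answer by how many of the 10 human annotators gave it; whether CDwPK-VQA uses the same protocol is not stated here. A simplified sketch (the official metric additionally averages over annotator subsets; this is the commonly quoted min(n/3, 1) form):

```python
def vqa_soft_accuracy(predicted: str, human_answers: list[str]) -> float:
    """VQA-v2 style soft accuracy: an answer scores min(#matching annotators / 3, 1),
    so agreeing with 3 or more of the (typically 10) humans counts as fully correct."""
    matches = sum(1 for a in human_answers
                  if a.strip().lower() == predicted.strip().lower())
    return min(matches / 3.0, 1.0)
```

For example, a prediction matching 4 of 10 annotators scores 1.0; matching only 2 scores 2/3.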
Title: Informed-Learning-Guided Visual Question Answering Model of Crop Disease
Authors: Yunpeng Zhao, Shansong Wang, Qingtian Zeng, Weijian Ni, Hua Duan, Nengfu Xie, Fengjin Xiao
Pub Date: 2024-12-16 | DOI: 10.34133/plantphenomics.0277
Plant Phenomics, vol. 6, p. 0277. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11649200/pdf/
Pub Date: 2024-12-13 | eCollection Date: 2024-01-01 | DOI: 10.34133/plantphenomics.0282
Kai Zhou, Saiting Qiu, Fuliang Cao, Guibin Wang, Lin Cao
Leaf nitrogen content (LNC) is a crucial indicator for assessing the nitrogen status of forest trees. LNC can be retrieved by inverting the PROSPECT-PRO model. However, LNC retrieval from the commonly used leaf bidirectional reflectance factor (BRF) spectra remains challenging owing to the confounding effects of mesophyll structure, specular reflection, and other constituents such as water. To address this issue, this study proposed an improved BRF-spectra-based approach that alleviates specular reflection effects and enhances leaf nitrogen absorption signals from Ginkgo trees and saplings, using 3 modified ratio indices (mPrior_800, mPrior_1131, and mPrior_1365) for prior estimation of the Nstruct structure parameter, combined with different inversion methods (STANDARD, sPROCOSINE, PROSDM, and PROCWT). The results demonstrated that the prior Nstruct estimation strategy using modified ratio indices outperformed both standard ratio indices and inversion without prior Nstruct estimation, with mPrior_1131 and mPrior_1365 in particular yielding reliable performance for most constituents. Using the optimal approaches (PROCWT_S3 combined with mPrior_1131 or mPrior_1365), optimal estimates of LNCarea (normalized root mean square error [NRMSE] = 12.94% to 14.49%) and LNCmass (NRMSE = 10.11% to 10.75%) were achieved, with the selected optimal wavebands concentrated in 5 main domains: 1440 to 1539 nm, 1580 to 1639 nm, 1900 to 1999 nm, 2020 to 2099 nm, and 2120 to 2179 nm. These findings highlight the marked potential of the BRF-spectra-based approach to improve LNC estimation and to deepen understanding of how prior Nstruct estimation affects LNC retrieval in leaves of Ginkgo trees and saplings.
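The NRMSE figures quoted above are RMSE values normalized to a percentage; normalization by the observation mean is one common convention and is assumed in this sketch (the paper may normalize by the range instead):

```python
import math

def nrmse_percent(observed: list[float], predicted: list[float]) -> float:
    """RMSE divided by the mean of the observations, expressed in percent.
    Assumes mean normalization; range normalization is another convention."""
    n = len(observed)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return 100.0 * rmse / (sum(observed) / n)
```

For example, observations [2.0, 4.0] against predictions [1.0, 5.0] give an RMSE of 1.0 and an NRMSE of about 33.3%.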
Title: Coupling PROSPECT with Prior Estimation of Leaf Structure to Improve the Retrieval of Leaf Nitrogen Content in Ginkgo from Bidirectional Reflectance Factor Spectra
Plant Phenomics, vol. 6, p. 0282. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11641793/pdf/
Pub Date: 2024-12-11 | eCollection Date: 2024-01-01 | DOI: 10.34133/plantphenomics.0280
Lukas Fichtl, Daniel Leitner, Andrea Schnepf, Dominik Schmidt, Katrin Kahlen, Matthias Friedel
Understanding root system architecture (RSA) is essential for improving crop resilience to climate change, yet assessing root systems of woody perennials under field conditions remains a challenge. This study introduces a pipeline that combines field excavation, in situ 3-dimensional digitization, and transformation of RSA data into an interoperable format to analyze and model the growth and water uptake of grapevine rootstock genotypes. Eight root systems of each of 3 grapevine rootstock genotypes ("101-14", "SO4", and "Richter 110") were excavated and digitized 3 and 6 months after planting. We validated the precision of the digitization method, compared in situ and ex situ digitization, and assessed root loss during excavation. The digitized RSA data were converted to root system markup language (RSML) format and imported into the CPlantBox modeling framework, which we adapted to include a static initial root system and a probabilistic tropism function. We then parameterized it to simulate genotype-specific growth patterns of grapevine rootstocks and integrated root hydraulic properties to derive a standard uptake fraction (SUF) for each genotype. Results demonstrated that excavation and in situ digitization accurately reflected the spatial structure of root systems, despite some underestimation of fine root length. Our experiment revealed significant genotypic variations in RSA over time and provided new insights into genotype-specific water acquisition capabilities. Simulated RSA closely resembled the specific features of the field-grown and digitized root systems. This study provides a foundational methodology for future research aimed at utilizing RSA models to improve the sustainability and productivity of woody perennials under changing climatic conditions.
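CPlantBox's probabilistic tropism is not reproduced here, but its core idea — each new root segment's heading is a random deflection that is then pulled back toward a preferred direction (gravity, for gravitropism) — can be sketched in 2D. This is a toy under stated assumptions: `sigma` and `g_weight` are illustrative parameters of this sketch, not CPlantBox's:

```python
import math
import random

def grow_root_2d(n_segments: int, seg_len: float, sigma: float,
                 g_weight: float, seed: int = 0) -> list[tuple[float, float]]:
    """Toy probabilistic gravitropism: each step, heading = previous heading
    + Gaussian noise, then nudged a fraction g_weight back toward straight
    down. Returns the (x, z) polyline of the root axis (z negative = depth)."""
    rng = random.Random(seed)
    down = -math.pi / 2.0                 # -z direction is 'down'
    heading = down
    x, z = 0.0, 0.0
    path = [(x, z)]
    for _ in range(n_segments):
        heading += rng.gauss(0.0, sigma)          # random deflection
        heading += g_weight * (down - heading)    # gravitropic pull
        x += seg_len * math.cos(heading)
        z += seg_len * math.sin(heading)
        path.append((x, z))
    return path
```

Larger `sigma` produces more tortuous axes; larger `g_weight` straightens them toward vertical, mimicking genotype-specific tropism strength.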
Title: A Field-to-Parameter Pipeline for Analyzing and Simulating Root System Architecture of Woody Perennials: Application to Grapevine Rootstocks
Plant Phenomics, vol. 6, p. 0280. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11633832/pdf/
Rice panicle traits substantially influence grain yield, making them a primary target for rice phenotyping studies. However, most existing techniques are limited to controlled indoor environments and have difficulty capturing rice panicle traits under natural growth conditions. Here, we developed PanicleNeRF, a novel method that enables high-precision, low-cost reconstruction of 3-dimensional (3D) rice panicle models in the field from smartphone video. The proposed method combines the large Segment Anything Model (SAM) and the small You Only Look Once version 8 (YOLOv8) model to achieve high-precision segmentation of rice panicle images. The neural radiance fields (NeRF) technique was then employed for 3D reconstruction using the 2D-segmented images. Finally, the resulting point clouds were processed to extract panicle traits. The results show that PanicleNeRF effectively addressed the 2D image segmentation task, achieving a mean F1 score of 86.9% and a mean Intersection over Union (IoU) of 79.8%, with nearly double the boundary overlap (BO) performance of YOLOv8. As for point cloud quality, PanicleNeRF significantly outperformed traditional SfM-MVS (structure-from-motion and multi-view stereo) methods such as COLMAP and Metashape. Panicle length was accurately extracted, with an rRMSE of 2.94% for indica and 1.75% for japonica rice. The panicle volume estimated from 3D point clouds correlated strongly with grain number (R2 = 0.85 for indica and 0.82 for japonica) and grain mass (0.80 for indica and 0.76 for japonica). This method provides a low-cost solution for high-throughput in-field phenotyping of rice panicles, accelerating rice breeding.
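The IoU figure above is the standard segmentation overlap measure; a minimal sketch over flat binary masks (BO, boundary overlap, is the paper's own measure and is not reproduced here):

```python
def iou(mask_a: list[int], mask_b: list[int]) -> float:
    """Intersection over Union of two equal-length binary masks:
    |A ∩ B| / |A ∪ B|. Two empty masks are treated as a perfect match."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 1.0
```

For example, masks [1,1,0,0] and [1,0,1,0] share 1 pixel out of 3 in their union, giving IoU = 1/3.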
Title: PanicleNeRF: Low-Cost, High-Precision In-Field Phenotyping of Rice Panicles with Smartphone
Authors: Xin Yang, Xuqi Lu, Pengyao Xie, Ziyue Guo, Hui Fang, Haowei Fu, Xiaochun Hu, Zhenbiao Sun, Haiyan Cen
Pub Date: 2024-12-05 | DOI: 10.34133/plantphenomics.0279
Plant Phenomics, vol. 6, p. 0279. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11617619/pdf/
Pub Date: 2024-12-05 | eCollection Date: 2024-01-01 | DOI: 10.34133/plantphenomics.0276
Yuanyuan Pan, Jingyu Li, Jiayi Zhang, Jiaoyang He, Zhihao Zhang, Xia Yao, Tao Cheng, Yan Zhu, Weixing Cao, Yongchao Tian
The accuracy of leaf nitrogen accumulation (LNA) estimation is often compromised by the vertical heterogeneity of crop nitrogen. In this study, an LNA estimation model accounting for the vertical heterogeneity of wheat was developed based on unmanned aerial vehicle (UAV) multispectral data and near-ground hyperspectral data, both collected at different view zenith angles (0°, -30°, and -45°). Winter wheat plants were evenly divided into 3 layers from top to bottom, and LNA was obtained for the upper, middle, and lower leaf layers, as well as for various combinations of these layers (upper and middle, middle and lower, and the entire canopy, referred to as LNACanopy). Linear regression (LR) and random forest regression (RF) models were constructed to estimate LNA for each individual leaf layer. Subsequently, models for estimating LNACanopy that considered vertical heterogeneity (LR-LNASum and RF-LNASum) were established based on the relationships between LNACanopy and the LNA of individual leaf layers. Meanwhile, LNA models that ignored vertical heterogeneity (LR-LNAnon and RF-LNAnon) were used for comparative validation. The validation datasets consisted of UAV-simulated data derived from hyperspectral reflectance and UAV-measured data. Results showed that the LNASum models had markedly higher accuracy than the LNAnon models. The optimal scheme for estimating LNACanopy combined the upper, middle, and lower layers based on the normalized difference red edge index. Among these models, RF-LNASum demonstrated higher accuracy than LR-LNASum, with validation relative root mean square errors of 19.3% and 17.8% for the UAV-measured and simulated datasets, respectively.
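The LNA_Sum idea — predict each leaf layer from the view angle that best observes it, then combine the layer predictions into a canopy total — can be sketched as follows. The linear form and all coefficients are illustrative stand-ins; the paper's actual per-layer models are LR and RF fits:

```python
def predict_layer_lna(vi: float, slope: float, intercept: float) -> float:
    """Illustrative per-layer model: LNA_layer = slope * VI + intercept,
    where VI is a vegetation index observed at that layer's view angle."""
    return slope * vi + intercept

def predict_canopy_lna_sum(vis_by_layer: list[float],
                           coefs_by_layer: list[tuple[float, float]]) -> float:
    """LNA_Sum scheme: canopy LNA as the sum of upper/middle/lower layer
    predictions (view angles 0°, -30°, and -45° in the study design)."""
    return sum(predict_layer_lna(vi, slope, intercept)
               for vi, (slope, intercept) in zip(vis_by_layer, coefs_by_layer))
```

With per-layer indices [0.5, 0.4, 0.3] and identical illustrative coefficients (slope 10, intercept 0), the canopy estimate is 12.0.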
Title: Estimating Leaf Nitrogen Accumulation Considering Vertical Heterogeneity Using Multiangular Unmanned Aerial Vehicle Remote Sensing in Wheat
Plant Phenomics, vol. 6, p. 0276. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11617620/pdf/
Accurate counting of cereal crops, e.g., maize, rice, sorghum, and wheat, is crucial for estimating grain production and ensuring food security. However, existing methods for counting cereal crops focus predominantly on building models for a specific crop head; thus, they lack generalizability across crop varieties. This paper presents Counting Heads of Cereal Crops Net (CHCNet), a unified model for counting the heads of multiple cereal crops via few-shot learning, which effectively reduces labeling costs. Specifically, a refined vision encoder is developed to enhance feature embedding, in which a foundation model, the Segment Anything Model (SAM), is employed to emphasize the marked crop heads while mitigating complex background effects. Furthermore, a multiscale feature interaction module integrating a similarity metric is proposed to facilitate automatic learning of crop-specific features across varying scales, enhancing the ability to describe crop heads of various sizes and shapes. CHCNet adopts a 2-stage training procedure: the initial stage focuses on latent feature mining to capture common feature representations of cereal crops; in the subsequent stage, inference is performed without additional training by extracting domain-specific features of the target crop from selected exemplars. In extensive experiments on 6 diverse crop datasets captured from ground cameras and drones, CHCNet substantially outperformed state-of-the-art counting methods in cross-crop generalization, achieving mean absolute errors (MAEs) of 9.96 and 9.38 for maize, 13.94 for sorghum, 7.94 for rice, and 15.62 for mixed crops. A user-friendly interactive demo is available at http://cerealcropnet.com/, where researchers are invited to evaluate CHCNet. The source code is available at https://github.com/Small-flyguy/CHCNet.
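The MAE values above are mean absolute errors between predicted and ground-truth head counts, presumably averaged per image; as a minimal sketch:

```python
def mean_absolute_error(truth: list[int], pred: list[float]) -> float:
    """MAE over per-image head counts: mean of |truth - prediction|."""
    if len(truth) != len(pred):
        raise ValueError("count lists must have equal length")
    return sum(abs(t - p) for t, p in zip(truth, pred)) / len(truth)
```

For example, true counts [10, 20, 30] against predictions [12, 19, 30] give an MAE of 1.0.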
Title: One to All: Toward a Unified Model for Counting Cereal Crop Heads Based on Few-Shot Learning
Authors: Qiang Wang, Xijian Fan, Ziqing Zhuang, Tardi Tjahjadi, Shichao Jin, Honghua Huan, Qiaolin Ye
Pub Date: 2024-11-28 | DOI: 10.34133/plantphenomics.0271
Plant Phenomics, vol. 6, p. 0271. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11639208/pdf/
Accurate understanding of the vertical patterns of canopy structure and the distribution of solar radiation in aquatic vegetation is pivotal for formulating a bidirectional reflection model and comprehending the ecological dynamics of wetlands. However, the stratified physiological and biochemical structural properties of aquatic vegetation in wetlands remain unexplored because they pose greater inherent investigation challenges than terrestrial vegetation. Based on radiative transfer theory, this study evaluated the structural characteristics of vegetation communities and the seasonal regulation of direct solar radiation within the canopy for Phragmites australis (P. australis) and Typha orientalis (T. orientalis), 2 typical emergent aquatic vegetation (EAV) species. Observations revealed that physiological and biochemical metrics varied with canopy height at different growth stages, with the stratified leaf area index in the middle of the P. australis cluster being higher than at its top and bottom. Moreover, the vertical profiles of direct solar radiation decrease with depth, showing bowl-shaped and V-shaped curves in the P. australis and T. orientalis clusters, respectively. Interestingly, the sensitivity of layered direct solar radiation transmittance to canopy structural parameters is markedly higher than its sensitivity to canopy pigments, suggesting considerable potential for estimating layered structural parameters. The transmittance of direct solar radiation decreases with increasing leaf area index at different heights, and stratified transmittance within a cluster can be accurately described by a negative binomial function with a deviation of less than 2%.
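A negative binomial transmittance law of the kind the abstract describes can be sketched as follows. The parameter values here are illustrative placeholders, not the coefficients fitted in the study; the functional form `T = (1 + k*LAI/m)**(-m)` is one common negative binomial parameterization, which decreases with cumulative leaf area index and approaches the Beer-Lambert law `exp(-k*LAI)` as `m` grows large.

```python
import numpy as np

def nb_transmittance(lai, k=0.5, m=5.0):
    """Negative binomial model of direct-beam transmittance vs. cumulative LAI.

    lai: cumulative leaf area index measured downward from the canopy top.
    k: extinction coefficient (illustrative value, not the fitted one).
    m: clumping/layer parameter; m -> infinity recovers Beer-Lambert.
    """
    lai = np.asarray(lai, dtype=float)
    return (1.0 + k * lai / m) ** (-m)
```

Fitting `k` and `m` per canopy layer against measured stratified transmittance (e.g., with `scipy.optimize.curve_fit`) would mirror the paper's reported sub-2% deviation, though the exact fitting protocol is not given in the abstract.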
{"title":"Seasonal Fluctuations and Vertical Heterogeneity of Biochemical-Structural Parameters in Wetland Emergent Aquatic Vegetation.","authors":"Huaijing Wang, Yunmei Li, Jianguang Wen, Gaolun Wang, Huaiqing Liu, Heng Lyu","doi":"10.34133/plantphenomics.0275","DOIUrl":"10.34133/plantphenomics.0275","url":null,"abstract":"<p><p>Accurate understanding of vertical patterns of canopy structure characteristics and solar radiation distribution patterns of aquatic vegetation is pivotal in formulating a bidirectional reflection model and comprehending the ecological dynamics of wetlands. Further, physiological and biochemical stratified structural properties of aquatic vegetation in wetlands remain unexplored due to more inherent investigation challenges than terrestrial vegetation. This study evaluated the structural characteristics of vegetation communities and the regulation of direct solar radiation variations within the canopy across seasons of <i>Phragmites australis (P. australis)</i> and <i>Typha orientalis (T. orientalis)</i>, 2 typical emergent aquatic vegetations (EAVs), based on radiative transfer theory. Observations revealed that physiological and biochemical metrics varied at different growth stages with canopy height, the stratified leaf area index in the middle being higher than at the top and bottom of the <i>P. australis</i> cluster. Moreover, the vertical profiles of direct solar radiation decrease with depth, showing a bowl-shaped and V-shaped curve in the <i>P. australis</i> and <i>T. orientalis</i> clusters, respectively. Interestingly, the sensitivity of layered solar direct radiation transmittance to canopy structural parameters is obviously higher than that of canopy pigments, suggesting considerable potential for estimating layered structural parameters. 
The transmittance of direct solar radiation decreases with increasing leaf area index at different heights, and stratified transmittance in the cluster can be accurately described by a negative binomial function with a deviation of less than 2%.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"6 ","pages":"0275"},"PeriodicalIF":7.6,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11602876/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142751329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-28eCollection Date: 2024-01-01DOI: 10.34133/plantphenomics.0278
Leonardo Volpato, Evan M Wright, Francisco E Gomez
Substantial manual effort has been devoted to tracking plant maturity and measuring early-stage plant density and crop height in experimental fields. In this study, RGB drone imagery and deep learning (DL) approaches are explored to measure relative maturity (RM), stand count (SC), and plant height (PH), potentially offering higher throughput, accuracy, and cost-effectiveness than traditional methods. A time series of drone images was used to estimate dry bean RM with a hybrid convolutional neural network (CNN) and long short-term memory (LSTM) model. For early-stage SC assessment, the Faster R-CNN object detection algorithm was evaluated. Flight frequencies, image resolution, and data augmentation techniques were investigated to enhance DL model performance. PH was obtained using a quantile method from digital surface model (DSM) and point cloud (PC) data sources. The CNN-LSTM model showed high accuracy in RM prediction across various conditions, outperforming traditional image preprocessing approaches. Including growing degree days (GDD) data improved the model's performance under specific environmental stresses. The Faster R-CNN model effectively identified early-stage bean plants, demonstrating superior accuracy over traditional methods and consistency across different flight altitudes. For PH estimation, moderate correlations with ground-truth data were observed across both datasets analyzed; the choice between PC and DSM source data may depend on specific environmental and flight conditions. Overall, the CNN-LSTM and Faster R-CNN models proved more effective than conventional techniques in quantifying RM and SC. The subtraction method proposed for estimating PH without accurate ground elevation data yielded results comparable to the difference-based method. Additionally, the pipeline and open-source software developed here hold potential to substantially benefit the phenotyping community.
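A quantile-based height estimate of the general kind described above can be sketched in a few lines: take a high quantile of the plot's point elevations as the canopy surface and subtract a ground reference. This is a minimal sketch under stated assumptions; `plant_height` and the quantile choices are illustrative, not the paper's exact settings.

```python
import numpy as np

def plant_height(z, ground=None, canopy_q=99, ground_q=1):
    """Quantile-based plant height from point-cloud or DSM elevations.

    z: 1-D array of elevations (m) for one plot.
    If `ground` (a known ground elevation) is given, height is the canopy
    quantile minus ground (difference method). Otherwise a low quantile of
    the plot's own points stands in for the ground, analogous to the
    subtraction method for plots lacking accurate ground elevation data.
    """
    z = np.asarray(z, dtype=float)
    canopy = np.percentile(z, canopy_q)   # robust canopy-top estimate
    base = ground if ground is not None else np.percentile(z, ground_q)
    return canopy - base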
{"title":"Drone-Based Digital Phenotyping to Evaluating Relative Maturity, Stand Count, and Plant Height in Dry Beans (<i>Phaseolus vulgaris</i> L.).","authors":"Leonardo Volpato, Evan M Wright, Francisco E Gomez","doi":"10.34133/plantphenomics.0278","DOIUrl":"10.34133/plantphenomics.0278","url":null,"abstract":"<p><p>Substantial effort has been made in manually tracking plant maturity and to measure early-stage plant density and crop height in experimental fields. In this study, RGB drone imagery and deep learning (DL) approaches are explored to measure relative maturity (RM), stand count (SC), and plant height (PH), potentially offering higher throughput, accuracy, and cost-effectiveness than traditional methods. A time series of drone images was utilized to estimate dry bean RM employing a hybrid convolutional neural network (CNN) and long short-term memory (LSTM) model. For early-stage SC assessment, Faster RCNN object detection algorithm was evaluated. Flight frequencies, image resolution, and data augmentation techniques were investigated to enhance DL model performance. PH was obtained using a quantile method from digital surface model (DSM) and point cloud (PC) data sources. The CNN-LSTM model showed high accuracy in RM prediction across various conditions, outperforming traditional image preprocessing approaches. The inclusion of growing degree days (GDD) data improved the model's performance under specific environmental stresses. The Faster R-CNN model effectively identified early-stage bean plants, demonstrating superior accuracy over traditional methods and consistency across different flight altitudes. For PH estimation, moderate correlations with ground-truth data were observed across both datasets analyzed. The choice between PC and DSM source data may depend on specific environmental and flight conditions. Overall, the CNN-LSTM and Faster R-CNN models proved more effective than conventional techniques in quantifying RM and SC. 
The subtraction method proposed for estimating PH without accurate ground elevation data yielded results comparable to the difference-based method. Additionally, the pipeline and open-source software developed hold potential to significantly benefit the phenotyping community.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"6 ","pages":"0278"},"PeriodicalIF":7.6,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11602537/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142751325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Plant diseases are a critical driver of the global food crisis. Integrating advanced artificial intelligence technologies can substantially enhance plant disease diagnostics; however, early detection of complex diseases remains challenging for current methods. Employing multimodal technologies, akin to medical artificial intelligence diagnostics that combine diverse data types, may offer a more effective solution. At present, plant disease research relies predominantly on single-modal data, which limits the scope for early and detailed diagnosis. Consequently, developing text-modality generation techniques is essential for overcoming the limitations of plant disease recognition. To this end, we propose a method that aligns plant phenotypes with trait descriptions and generates diagnostic text by progressively masking disease images. First, for training and validation, we annotate 5,728 disease phenotype images with expert diagnostic text and provide annotated text and trait labels for 210,000 disease images. Then, we propose the PhenoTrait text description model, which consists of global and heterogeneous feature encoders as well as switching-attention decoders, for accurate context-aware output. Next, to generate more phenotypically appropriate descriptions, we adopt 3 stages of embedding image features into semantic structures, generating characterizations that preserve trait features. Finally, our experimental results show that our model outperforms several frontier models on multiple trait descriptions, including the larger models GPT-4 and GPT-4o. Our code and dataset are available at https://plantext.samlab.cn/.
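The progressive-masking step can be pictured with a small sketch that hides an increasing, nested set of image patches, so each stage sees strictly less of the disease image than the last. The patch size, ratios, and `progressive_masks` helper are assumptions for illustration only, not PlanText's actual masking schedule.

```python
import numpy as np

def progressive_masks(image, ratios=(0.25, 0.5, 0.75), patch=16, seed=0):
    """Yield progressively masked copies of an image (a minimal sketch).

    Patches are hidden in a fixed random order, so each stage's mask
    contains the previous stage's mask, loosely mirroring the idea of
    gradually masked guidance. Values are zeroed inside masked patches.
    """
    h, w = image.shape[:2]
    gy, gx = h // patch, w // patch
    order = np.random.default_rng(seed).permutation(gy * gx)
    for r in ratios:
        out = image.copy()
        for idx in order[: int(r * gy * gx)]:
            y, x = divmod(int(idx), gx)
            out[y * patch:(y + 1) * patch, x * patch:(x + 1) * patch] = 0
        yield out
```

Because the masked sets are nested, a description model trained on the sequence is pushed to rely on trait features that survive heavy occlusion, which is the intuition the abstract's "gradually masked guidance" suggests.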
{"title":"PlanText: Gradually Masked Guidance to Align Image Phenotypes with Trait Descriptions for Plant Disease Texts.","authors":"Kejun Zhao, Xingcai Wu, Yuanyuan Xiao, Sijun Jiang, Peijia Yu, Yazhou Wang, Qi Wang","doi":"10.34133/plantphenomics.0272","DOIUrl":"10.34133/plantphenomics.0272","url":null,"abstract":"<p><p>Plant diseases are a critical driver of the global food crisis. The integration of advanced artificial intelligence technologies can substantially enhance plant disease diagnostics. However, current methods for early and complex detection remain challenging. Employing multimodal technologies, akin to medical artificial intelligence diagnostics that combine diverse data types, may offer a more effective solution. Presently, the reliance on single-modal data predominates in plant disease research, which limits the scope for early and detailed diagnosis. Consequently, developing text modality generation techniques is essential for overcoming the limitations in plant disease recognition. To this end, we propose a method for aligning plant phenotypes with trait descriptions, which diagnoses text by progressively masking disease images. First, for training and validation, we annotate 5,728 disease phenotype images with expert diagnostic text and provide annotated text and trait labels for 210,000 disease images. Then, we propose a PhenoTrait text description model, which consists of global and heterogeneous feature encoders as well as switching-attention decoders, for accurate context-aware output. Next, to generate a more phenotypically appropriate description, we adopt 3 stages of embedding image features into semantic structures, which generate characterizations that preserve trait features. Finally, our experimental results show that our model outperforms several frontier models in multiple trait descriptions, including the larger models GPT-4 and GPT-4o. 
Our code and dataset are available at https://plantext.samlab.cn/.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"6 ","pages":"0272"},"PeriodicalIF":7.6,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11589250/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142732084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}