
Latest publications in Plant Phenomics

Fitting maximum crown width height of Chinese fir through ensemble learning combined with fine spatial competition.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-02-28 | eCollection Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100018
Zeyu Cui, Huaiqing Zhang, Yang Liu, Jing Zhang, Rurao Fu, Kexin Lei

Accurate acquisition of forest spatial competition and tree 3D structural phenotype parameters is crucial for exploring tree-environment interactions. However, due to occlusion between tree crowns, current UAV-based and ground-based LiDAR struggle to capture complete crown information in dense stands, making the extraction of parameters such as maximum crown width height (HMCW) challenging. This study proposes a canopy spatial relationship-based method for constructing forest spatial structure units and employs five ensemble learning techniques to train 11 machine learning model combinations. By coupling spatial competition with phenotype parameters, the study identifies the optimal fitting model for HMCW of Chinese fir. The results demonstrate that the constructed spatial structure units align closely with existing research while addressing issues of incorrectly selected or omitted neighboring trees. Among the 10,191 trained HMCW models, the Bagging model integrating XGBoost, Random Forest (RF), Support Vector Regression (SVR), Gradient Boosting (GB), and Ridge exhibited the best performance. Compared to the best single model (RF), the Bagging model achieved improved accuracy (R² = 0.8346, a 1.6% improvement; RMSE = 1.4042, reduced by 6.66%; EVS = 0.8389; MAE = 0.9129; MAPE = 0.0508; and MedAE = 0.5076, with corresponding improvements of 1.63%, 1.49%, 0.1%, and 7.06%, respectively). This study provides a viable solution for modeling HMCW in species with similar structural characteristics and offers a method for extracting other hard-to-measure parameters. The refined spatial structure units better link 3D structural phenotypes with environmental factors. This approach aids in canopy morphology simulation and forest management research.
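The bagging-style ensemble the abstract describes can be sketched with scikit-learn. This is a minimal illustration, not the authors' implementation: XGBoost is stood in for by a second GradientBoostingRegressor so the example needs only scikit-learn, and the competition/phenotype features are synthetic.

```python
# Sketch of an averaging ensemble over the five base learners named in the
# abstract. "gb_xgb_proxy" substitutes for XGBoost (an assumption, to keep
# the example dependency-free); features and targets are synthetic.
import numpy as np
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              VotingRegressor)
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))  # stand-in competition/phenotype features
y = X @ np.array([1.5, -2.0, 0.5, 1.0, 0.3]) + rng.normal(scale=0.2, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingRegressor([
    ("gb_xgb_proxy", GradientBoostingRegressor(random_state=0)),  # proxy for XGBoost
    ("rf", RandomForestRegressor(random_state=0)),
    ("svr", SVR()),
    ("gb", GradientBoostingRegressor(random_state=1)),
    ("ridge", Ridge()),
])
ensemble.fit(X_tr, y_tr)      # each member is fit; predictions are averaged
r2 = ensemble.score(X_te, y_te)
print(round(r2, 3))
```

Averaging predictions across heterogeneous learners is what lets the ensemble outperform its best single member when their errors are not fully correlated.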

Citations: 0
LKNet: Enhancing rice canopy panicle counting accuracy with an optimized point-based framework.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-02-28 | eCollection Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100003
Ziqiu Li, Weiyuan Hong, Xiangqian Feng, Aidong Wang, Hengyu Ma, Jinhua Qin, Qin Yao, Danying Wang, Song Chen

Location-based methods for counting rice panicles have often been underestimated, primarily due to their perceived inferior performance when compared to detection-based techniques. However, we argue that the potential of these location-based methods has not been fully realized, largely owing to the limitations of existing model architectures. In response to this challenge, we introduce LKNet, an innovative model developed on the foundation of the location-based framework P2Pnet. To enhance the performance of panicle counting across diverse types and growth stages, we implemented several key strategies. Firstly, we reconstructed the localization loss function as a predictive probability distribution to reduce the influence of manual labeling. Additionally, we dynamically adapted the receptive field to better accommodate different panicle types through the use of large kernel convolutional blocks. We evaluated LKNet on several publicly available counting task datasets and achieved state-of-the-art performance on the Diverse Rice Panicle Detection dataset. Furthermore, we employed a rice panicle dataset collected at an altitude of 7 m, which includes various panicle types and growth stages, for model training and evaluation. The results showed that LKNet effectively accommodates variations in panicle morphology, with R² values ranging from 0.903 to 0.989. These findings highlight LKNet's potential to enhance precision in panicle counting in rice breeding programs.

Citations: 0
Dynamic maize true leaf area index retrieval with KGCNN and TL and integrated 3D radiative transfer modeling for crop phenotyping.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-02-28 | eCollection Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100004
Dan Zhao, Guijun Yang, Tongyu Xu, Fenghua Yu, Chengjian Zhang, Zhida Cheng, Lipeng Ren, Hao Yang

Accurate, real-time monitoring of the true leaf area index (LAI) is essential for assessing crop growth status and predicting yields. Conventional LAI inversion approaches have been constrained by insufficient data representativeness and environmental variability, particularly when applied across interannual variations and different phenological stages. This study presented a novel methodology integrating three-dimensional radiative transfer modeling (3D RTM) with knowledge-guided deep learning to address these limitations. We developed a knowledge-guided convolutional neural network (KGCNN) architecture incorporating 3D canopy structural physics, enhanced through transfer learning (TL) techniques for cross-temporal adaptation. The KGCNN model was initially pre-trained on synthetic datasets generated by the large-scale remote sensing scattering model (LESS), followed by domain-specific fine-tuning using 2021 field measurements, and culminating in cross-year validation with 2022-2023 datasets. Our results demonstrated significant improvements over conventional approaches, with the 3D RTM-based KGCNN achieving superior performance compared to 1D RTM implementations (PROSAIL + CNN + TL). Specifically, for the 2022 dataset, the overall R² increased by 0.27 and RMSE decreased by 2.46; for the 2023 dataset, the overall RMSE decreased by 1.62, compared to the PROSAIL + TL method. Our method (3D RTM + KGCNN + TL) delivered superior LAI retrieval accuracy on the two-year datasets compared to the LSTM + TL, RNN + TL, and 3D RTM + RF models. This study also introduced an effective 3D scene modeling strategy that integrates scenarios representing the measured data range with additional synthetic scenes generated through random combinations of structural parameters. By incorporating detailed 3D crop structural information into the KGCNN network and fine-tuning the model with measured data, the approach significantly enhanced the model's adaptability to varying data distributions across different years and growth stages. This approach thus improved both the accuracy and stability of true LAI retrieval.
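The pre-train-then-fine-tune pattern described above can be illustrated with a small sketch, which is not the paper's KGCNN: a scikit-learn network is pre-trained on a stand-in for the RTM-simulated data, then fine-tuned on a smaller "measured" set with a domain shift, reusing the learned weights via `warm_start`.

```python
# Minimal transfer-learning sketch (an assumption for illustration, not the
# authors' KGCNN): pre-train on a large "simulated" set, fine-tune on a
# small "field" set by continuing optimization from the pre-trained weights.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
def response(X):                      # toy stand-in for the LAI response
    return np.sin(X[:, 0]) + 0.5 * X[:, 1]

X_sim = rng.uniform(-2, 2, (2000, 2))         # large simulated dataset
y_sim = response(X_sim)
X_field = rng.uniform(-2, 2, (100, 2))        # small field campaign
y_field = response(X_field) + 0.3             # simple domain shift (bias)

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500,
                     warm_start=True, random_state=0)
model.fit(X_sim, y_sim)                       # pre-training stage
model.max_iter = 100
model.fit(X_field, y_field)                   # fine-tuning continues from
                                              # the pre-trained weights
print(round(model.score(X_field, y_field), 3))
```

With `warm_start=True`, the second `fit` call resumes from the existing weights instead of reinitializing, which is the essence of fine-tuning a pre-trained model on target-domain data.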

Citations: 0
Identification of phenotypic and transcriptomic signatures underpinning maize crown root systems.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-02-28 | eCollection Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100008
Jodi B Callwood, Craig L Cowling, Ella G Townsend, Shikha Malik, Melissa A Draves, Jasper Khor, Jackson P Marshall, Heather Sweers, Justin W Walley, Dior R Kelley

Maize is pivotal in supporting global agriculture and addressing food security challenges. Crop root systems are critical for water uptake and nutrient acquisition, which impact yield. Quantitative trait phenotyping is essential for better understanding the genetic factors underpinning maize root growth and development. Root systems are challenging to phenotype given their below-ground, soil-bound nature. In addition, manual trait annotation of root images is tedious and can lead to inaccuracies and inconsistencies between individuals, resulting in data discrepancies. In this study, we explored juvenile root phenotyping in the presence and absence of treatment with auxin, a key phytohormone in root development, using manual curation and gene expression analyses. In addition, we developed an automated phenotyping pipeline for field-grown maize crown roots by leveraging open-source software. By examining a test set of 11 diverse maize genotypes for juvenile-adult root trait correlations and gene expression patterns, an inconsistent correlation was observed, underscoring the developmental plasticity prevalent during maize root morphogenesis. Transcripts involved in hormone signaling and stress responses were among the differentially expressed genes in roots from 20 diverse maize genotypes, suggesting many molecular processes may underlie the observed phenotypic variance. In particular, co-expressed gene networks associated with module-trait relationships included 1,3-β-glucan, which plays a crucial role in cell wall dynamics. This study furthers our understanding of genotype-phenotype relationships, which is relevant for informing agricultural strategies to improve maize root physiology.

Citations: 0
Genetic resolution of multi-level plant height in common wheat using the 3D canopy model from ultra-low altitude unmanned aerial vehicle imagery.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-02-27 | eCollection Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100017
Shuaipeng Fei, Yidan Jia, Lei Li, Shunfu Xiao, Jie Song, Shurong Yang, Duoxia Wang, Guangyao Sun, Bohan Zhang, Keyi Wang, Junjie Ma, Jindong Liu, Yonggui Xiao, Yuntao Ma

In quantitative genomic analysis of wheat plant height (PH), the average height of a few representative plants is typically used to represent the PH of the entire plot, which overlooks the variation in height among the other plants. Extracting different height quantiles from canopy point clouds can address this limitation. For this purpose, low-cost UAV cross-circling oblique (CCO) imaging, combined with structure-from-motion (SfM) and multi-view stereopsis (MVS), was employed to generate precise canopy point clouds for 262 F5 recombinant inbred lines (Zhongmai 578 × Jimai 22) across seven environments. Multi-level 3D-PH measurements were extracted from six height quantiles, revealing a strong correlation (mean r = 0.95) between 3D-PH and field-measured PH (FM-PH) across environments. The 90% and 92% height quantiles showed the closest agreement with FM-PH compared to other quantiles. Eleven stable quantitative trait loci (QTLs) associated with multi-level 3D-PH were identified using a 50K single nucleotide polymorphism array. Among these, QPhzj.caas-3A.2 (detected by 3D-PH) and QPhzj.caas-7A.1 (detected by both FM-PH and 3D-PH) represented potential novel loci. KASP markers for these QTLs were developed and validated. Furthermore, within the intervals of QPhzj.caas-5A and QPhzj.caas-3B (both detected by 3D-PH), two candidate genes associated with PH regulation were identified: TaGL3-5A and Rht5, respectively. Corresponding KASP markers for these genes were also developed and validated. This study highlighted the advantages of the 3D model and multi-level 3D-PH in elucidating the genetic basis of crop height, and provided a precise and objective basis for advancing wheat breeding programs.
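The multi-level height idea above is straightforward to sketch: rather than a single plot height, take several quantiles of the canopy point cloud's z-coordinates relative to a ground reference. The point cloud below is synthetic; the quantile levels echo those mentioned in the abstract.

```python
# Sketch of multi-level 3D plant height: height quantiles of canopy
# z-coordinates above a ground reference. Synthetic data stands in for
# the UAV SfM/MVS point cloud.
import numpy as np

rng = np.random.default_rng(42)
# synthetic canopy: ground points near z=0 m plus a canopy layer near z=0.8 m
z = np.concatenate([rng.uniform(0.0, 0.05, 2000),
                    rng.normal(0.8, 0.1, 8000)])

ground = np.quantile(z, 0.01)                  # robust ground reference
levels = [0.50, 0.75, 0.90, 0.92, 0.95, 0.99]  # six height quantiles
ph = {q: float(np.quantile(z, q) - ground) for q in levels}

for q, h in ph.items():
    print(f"3D-PH at {q:.0%} quantile: {h:.3f} m")
```

Because quantiles are monotone in the level, the 92% height always sits at or above the 90% height, giving a consistent multi-level description of one canopy.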

Citations: 0
The blessing of Depth Anything: An almost unsupervised approach to crop segmentation with depth-informed pseudo labeling.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-02-27 | eCollection Date: 2025-03-01 | DOI: 10.1016/j.plaphe.2025.100005
Songliang Cao, Binghui Xu, Wei Zhou, Letian Zhou, Jiafei Zhang, Yuhui Zheng, Weijuan Hu, Zhiguo Han, Hao Lu

We present Depth-Informed Crop Segmentation (DepthCropSeg), an almost unsupervised crop segmentation approach without manual pixel-level annotations. Crop segmentation is a fundamental vision task in agriculture, which benefits a number of downstream applications such as crop growth monitoring and yield estimation. Over the past decade, image-based crop segmentation approaches have shifted from classic color-based paradigms to recent deep learning-based ones. The latter, however, rely heavily on large amounts of data with high-quality manual annotation, so considerable human labor and time are spent. In this work, we leverage Depth Anything V2, a vision foundation model, to produce high-quality pseudo crop masks for training segmentation models. We compile a dataset of 17,199 images from six public plant segmentation sources, generating pseudo masks from depth maps after normalization and thresholding. After a coarse-to-fine manual screening, 1378 images with reliable masks are selected. We compare four semantic segmentation models and enhance the top-performing one with depth-informed two-stage self-training and depth-informed post-processing. To evaluate the feasibility and robustness of DepthCropSeg, we benchmark the segmentation performance on 10 public crop segmentation testing sets and a self-collected dataset covering in-field, laboratory, and unmanned aerial vehicle (UAV) scenarios. Experimental results show that our DepthCropSeg approach can achieve crop segmentation performance comparable to the fully supervised model trained with manually annotated data (86.91 vs. 87.10). For the first time, we demonstrate almost unsupervised, close-to-full-supervision crop segmentation successfully.
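The "normalization and thresholding" step for generating pseudo masks can be sketched as below. This is an illustrative assumption of how such a mask could be derived from a relative depth map, not DepthCropSeg's actual pipeline; it assumes foreground vegetation has larger values in the (inverse-)depth map a model like Depth Anything produces.

```python
# Illustrative sketch (an assumption, not the authors' code): turn a
# relative depth map into a binary pseudo crop mask via min-max
# normalization followed by a fixed threshold.
import numpy as np

def depth_to_pseudo_mask(depth: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Normalize a relative depth map to [0, 1] and threshold it.

    Assumes foreground vegetation is closer to the camera, i.e. takes
    larger values in an inverse-depth map.
    """
    d = depth.astype(np.float64)
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)  # min-max normalization
    return d > threshold                             # boolean crop mask

# toy depth map: a 4x4 "plant" blob closer to the camera than the background
depth = np.full((8, 8), 0.2)
depth[2:6, 2:6] = 0.9
mask = depth_to_pseudo_mask(depth)
print(mask.sum())  # number of foreground pixels
```

In practice such raw masks would still pass through the manual screening and depth-informed post-processing the abstract describes before being used as training labels.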

{"title":"The blessing of Depth Anything: An almost unsupervised approach to crop segmentation with depth-informed pseudo labeling.","authors":"Songliang Cao, Binghui Xu, Wei Zhou, Letian Zhou, Jiafei Zhang, Yuhui Zheng, Weijuan Hu, Zhiguo Han, Hao Lu","doi":"10.1016/j.plaphe.2025.100005","DOIUrl":"10.1016/j.plaphe.2025.100005","url":null,"abstract":"<p><p>We present Depth-Informed Crop Segmentation (DepthCropSeg), an almost unsupervised crop segmentation approach without manual pixel-level annotations. Crop segmentation is a fundamental vision task in agriculture, which benefits a number of downstream applications such as crop growth monitoring and yield estimation. Over the past decade, image-based crop segmentation approaches have shifted from classic color-based paradigms to recent deep learning-based ones. The latter, however, rely heavily on large amounts of data with high-quality manual annotation such that considerable human labor and time are spent. In this work, we leverage Depth Anything V2, a vision foundation model, to produce high-quality pseudo crop masks for training segmentation models. We compile a dataset of 17,199 images from six public plant segmentation sources, generating pseudo masks from depth maps after normalization and thresholding. After a coarse-to-fine manual screening, 1378 images with reliable masks are selected. We compare four semantic segmentation models and enhance the top-performing one with depth-informed two-stage self-training and depth-informed post-processing. To evaluate the feasibility and robustness of DepthCropSeg, we benchmark the segmentation performance on 10 public crop segmentation testing sets and a self-collect dataset covering in-field, laboratory, and unmanned aerial vehicle (UAV) scenarios. Experimental results show that our DepthCropSeg approach can achieve crop segmentation performance comparable to the fully supervised model trained with manually annotated data (86.91 vs. 87.10). 
For the first time, we demonstrate almost unsupervised, close-to-full-supervision crop segmentation successfully.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 1","pages":"100005"},"PeriodicalIF":6.4,"publicationDate":"2025-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12709960/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145782405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
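The pseudo-mask step the abstract describes (normalize a Depth Anything depth map, then threshold it) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the 0.5 threshold and the toy depth values are assumptions, and real pipelines would add the paper's coarse-to-fine screening on top.

```python
import numpy as np

def pseudo_crop_mask(depth: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarize a relative depth map into a foreground (crop) mask.

    Min-max normalizes depth to [0, 1] (monocular depth models output
    larger values for nearer surfaces, so plants imaged against a more
    distant background end up bright), then thresholds.
    """
    d = depth.astype(np.float64)
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)  # min-max normalization
    return (d > threshold).astype(np.uint8)          # 1 = crop, 0 = background

# Toy depth map: a "plant" blob nearer to the camera than the ground.
depth = np.array([[0.1, 0.1, 0.1],
                  [0.1, 0.9, 0.8],
                  [0.1, 0.9, 0.1]])
mask = pseudo_crop_mask(depth)
```

A fixed global threshold is the simplest choice; per-image or adaptive thresholds would be a natural refinement when backgrounds vary.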
Citations: 0
In vivo tracing the trajectory of cell lignification in pear fruit during development using click chemistry imaging.
IF 6.4 CAS Zone 1 (Agricultural & Forest Sciences) Q1 AGRONOMY Pub Date : 2025-02-25 eCollection Date: 2025-03-01 DOI: 10.1016/j.plaphe.2025.100010
Nan Zhu, Guoming Wang, Kaijie Qi, Zhihua Xie, Shutian Tao, Shaoling Zhang

Pear fruit typically contains abundant highly lignified cells, known as stone cells, which have a negative impact on the fruit's edibility and processing quality. Despite extensive physiological and molecular research, there remains a limited understanding of the precise spatiotemporal aspects of lignification in flesh cells during pear development, particularly regarding the initiation of lignification and expansion of stone cell clusters. Here, an emerging bioorthogonal chemistry-based imaging technique was employed to in vivo visualize cell lignification dynamics in developing pear fruit. Specific identification of active sites undergoing lignification revealed that initial lignification of flesh cells occurred at 10 days after full bloom (DAFB), resulting in the formation of primordial stone cells (PSCs). These PSCs exhibited a random distribution and showed significantly larger diameter and area compared to normal parenchyma cells. Subsequently, PSCs developed pit canals and initiated lignification process in their neighboring cells at 15 DAFB. A cascading effect in the formation of stone cell aggregations was visualized by tracing of the lignification trajectory. This expansion process exhibited a domino effect, whereby lignification progressively spread from one cell to the next, creating a cascading pattern of stone cell formation. Finally, a cellular developmental model was proposed for stone cell formation. This study presented a procedure for applying the cutting-edge technology, click chemistry imaging, to get insights into practical scientific questions. The findings elucidated the spatiotemporal dynamics of active lignification sites in pear fruit at the cellular level, thereby enhancing our understanding of the initiation and aggregation processes in stone cell formation.

Citations: 0
Integrating crop models, single nucleotide polymorphism, and climatic indices to develop genotype-environment interaction model: A case study on rice flowering time.
IF 6.4 CAS Zone 1 (Agricultural & Forest Sciences) Q1 AGRONOMY Pub Date : 2025-02-25 eCollection Date: 2025-03-01 DOI: 10.1016/j.plaphe.2025.100007
Jinhan Zhang, Shaoyuan Zhang, Yubin Yang, Wenliang Yan, Xiaomao Lin, Lloyd T Wilson, Bing Liu, Leilei Liu, Liujun Xiao, Yan Zhu, Weixing Cao, Liang Tang

Genotype-environment interaction (G × E) models have potential in digital breeding and crop phenotype prediction. Using genotype-specific parameters (GSPs) as a bridge, crop growth models can capture G × E and simulate plant growth and development processes. In this study, a dataset containing multi-environment planting and flowering data for 169 genotypes, each with 700K single nucleotide polymorphism (SNP) markers, was used. Three rice growth models (ORYZA, CERES-Rice, and RiceGrow), SNPs, and climatic indices were integrated for flowering time prediction. Significant associations between GSPs and quantitative trait nucleotides (QTNs) were investigated using genome-wide association study (GWAS) methods. Several GSPs were associated with previously reported rice flowering genes, including DTH2, DTH3, and OsCOL15, demonstrating the genetic interpretability of the models. Compared with traditional model calibration, the rice models driven by SNP-based GSPs showed a decrease in goodness of fit, reflected in increased root mean square errors (RMSE). The crop model predictions were therefore further corrected using machine learning (ML) methods and climatic indicators, yielding accuracy comparable to that of the traditional calibration approach. In addition, the multi-model ensemble (MME) was comparable in accuracy to the best individual model. These findings can potentially facilitate molecular breeding and phenotypic prediction in rice.
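The correction step the abstract describes (adjusting crop-model flowering predictions with climatic indices) can be sketched as a residual fit. This is a hypothetical illustration: plain least squares stands in for the paper's ML methods, and all numbers, indices, and variable names are invented for the example.

```python
import numpy as np

# Hypothetical data: crop-model predicted days to flowering and two
# climatic indices (e.g., mean temperature, photoperiod) per environment.
model_pred = np.array([80., 85., 90., 95., 100., 105.])
climate    = np.array([[22., 12.5], [23., 12.8], [24., 13.0],
                       [25., 13.2], [26., 13.5], [27., 13.8]])
observed   = np.array([78., 84., 91., 94., 102., 104.])

# Fit the residuals (observed minus model) on the climate indices,
# with an intercept column, via ordinary least squares.
residual = observed - model_pred
X = np.hstack([np.ones((len(climate), 1)), climate])
coef, *_ = np.linalg.lstsq(X, residual, rcond=None)

# Corrected prediction = process-model output + learned climate correction.
corrected = model_pred + X @ coef
rmse_raw = np.sqrt(np.mean((observed - model_pred) ** 2))
rmse_cor = np.sqrt(np.mean((observed - corrected) ** 2))
```

Because zero coefficients are always a feasible fit, the corrected RMSE can never exceed the raw RMSE on the training environments; held-out environments are where the real test lies.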

Citations: 0
Multi-granularity alignment for crop diseases detection.
IF 6.4 CAS Zone 1 (Agricultural & Forest Sciences) Q1 AGRONOMY Pub Date : 2025-02-24 eCollection Date: 2025-03-01 DOI: 10.1016/j.plaphe.2025.100001
Guinan Guo, Fang Zhou, Qingyang Wu, Dongran Zhai

The threat of crop diseases can reduce yields and significantly hinder progress towards the sustainable development goal of "zero hunger." When detecting crop diseases, variations in data collection conditions can lead to significant differences in the spatial distribution features of training and testing data. Models trained on specific datasets often perform poorly when applied to detect crop diseases in new datasets, significantly degrading cross-domain object detection performance. To address the challenges of cross-domain crop disease detection, this paper proposes a Multi-Granularity Alignment (MGA) domain adaptation framework that is compatible with other object detectors and generalizes well. The approach integrates multi-granularity alignment and omni-scale gated fusion domain adaptation components into an enhanced object detector, aiming to align features between the source and target domains and reduce their disparities. MGA conducts scale-aware convolutional aggregation on the feature maps of the object detector and uses discriminators at three levels (category, instance, and pixel) to identify the domain source, aligning features across domains from a granularity-dependent perspective and thereby achieving cross-domain object detection. Experimental results demonstrate that MGA achieves the highest mAP on datasets collected from different regions, environments, and styles, with scores of 47.9 % (dataset: PVi → CDi), 48.3 % (dataset: PDc → PVi), and 49.2 % (dataset: data with style transfer → CDi). This performance significantly surpasses other object detection methods. When integrated into Faster R-CNN, MGA also achieves a remarkable mAP of 44.7 % on the CDi → data with style transfer dataset, demonstrating robust generalization capabilities.
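A minimal sketch of the multi-granularity idea the abstract describes: domain-discriminator outputs at the category, instance, and pixel levels are each scored against the domain label with binary cross-entropy and combined as a weighted sum. The weights, scores, and function names are assumptions for illustration only; the actual MGA detector trains these discriminators adversarially inside the network rather than as a standalone loss.

```python
import numpy as np

def bce(pred, label):
    """Binary cross-entropy of discriminator outputs against the
    domain label (0 = source domain, 1 = target domain)."""
    pred = np.clip(pred, 1e-7, 1 - 1e-7)
    return float(np.mean(-(label * np.log(pred) + (1 - label) * np.log(1 - pred))))

def mga_loss(pixel_pred, inst_pred, cat_pred, domain_label, w=(1.0, 1.0, 1.0)):
    """Weighted sum of pixel-, instance-, and category-level alignment losses."""
    losses = [bce(p, domain_label) for p in (pixel_pred, inst_pred, cat_pred)]
    return float(sum(wi * li for wi, li in zip(w, losses)))

# Discriminator outputs for one source-domain batch (domain label 0):
pixel = np.array([0.2, 0.1, 0.3])   # per-pixel domain scores
inst  = np.array([0.25, 0.15])      # per-instance (region) scores
cat   = np.array([0.1])             # image/category-level score
loss = mga_loss(pixel, inst, cat, domain_label=0.0)
```

In adversarial training the detector's features would be updated to *increase* this loss (confuse the discriminators), typically via a gradient reversal layer.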

Citations: 0
3D-NOD: 3D new organ detection in plant growth by a spatiotemporal point cloud deep segmentation framework.
IF 6.4 CAS Zone 1 (Agricultural & Forest Sciences) Q1 AGRONOMY Pub Date : 2025-02-22 eCollection Date: 2025-03-01 DOI: 10.1016/j.plaphe.2025.100002
Dawei Li, Foysal Ahmed, Zhanjiang Wang

Automatic plant growth monitoring is an important task in modern agriculture for maintaining high crop yield and accelerating breeding. Advances in 3D sensing have made 3D point clouds a better data form than images for presenting plant growth, as new organs are easier to identify in 3D space and organs occluded in 2D can be conveniently separated in 3D. Despite these attractive characteristics, analysis of 3D data can be quite challenging. We present 3D-NOD, a framework that detects new organs from time-series 3D plant data by spatiotemporal point cloud deep semantic segmentation. The design of 3D-NOD drew inspiration from how an experienced human uses spatiotemporal information to identify growing buds on a plant at two different growth stages. In the training phase, by introducing the Backward & Forward Labeling, Registration & Mix-up, and Humanoid Data Augmentation steps, our backbone network can be trained to recognize growth events with organ correlation from both the temporal and spatial domains. In testing, 3D-NOD shows better sensitivity at segmenting new organs than the conventional approach of using a network for direct semantic segmentation. On a time-series dataset containing multiple species, our method reached a mean F1-measure of 88.13 % and a mean IoU of 80.68 % on detecting both new and old organs with the DGCNN backbone.
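The core detection idea (flagging points of the later, registered scan that have no counterpart in the earlier scan) can be sketched with a simple nearest-neighbor test. The radius, toy clouds, and function name are assumptions; the paper's framework learns this decision with a deep segmentation network rather than a fixed geometric rule.

```python
import numpy as np

def label_new_points(cloud_t, cloud_t1, radius=0.1):
    """Label each point of the later scan: 1 = new growth, 0 = existing.

    Assumes the two clouds are already registered into a common frame
    (cf. the paper's Registration & Mix-up step); a point in cloud_t1
    with no neighbor in cloud_t within `radius` is treated as new.
    """
    # Pairwise distance matrix (N1 x N0); fine for small toy clouds,
    # a k-d tree would be used for real scans.
    d = np.linalg.norm(cloud_t1[:, None, :] - cloud_t[None, :, :], axis=-1)
    return (d.min(axis=1) > radius).astype(np.uint8)

# Toy example: a 3-point "stem" at time t, plus one new bud at time t+1.
cloud_t  = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 2.]])
cloud_t1 = np.vstack([cloud_t + 0.01, [[0.5, 0., 1.0]]])  # slight drift + new point
labels = label_new_points(cloud_t, cloud_t1)
```

The learned approach matters precisely where this rule breaks down: slow organ expansion, registration drift larger than the radius, and occlusion-induced holes all confuse a fixed threshold.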

Citations: 0