
Plant Phenomics: Latest Publications

SMICGS: A novel snapshot multispectral imaging sensor for quantitative monitoring of crop growth.
IF 6.4 | CAS Tier 1, Agricultural and Forest Sciences | Q1 AGRONOMY | Pub Date: 2025-05-20 | eCollection Date: 2025-06-01 | DOI: 10.1016/j.plaphe.2025.100056
Yongxian Wang, Mingchao Shao, Jiacheng Wang, Jingwei An, Jianshuang Wu, Xia Yao, Xiaohu Zhang, Chongya Jiang, Tao Cheng, Yongchao Tian, Weixing Cao, Dong Zhou, Yan Zhu

Unmanned aerial vehicle (UAV)-based multispectral imaging is one of the most widely used technologies for rapid crop monitoring, which is essential for crop-growth management. However, the technology's complex optical structure and the difficulty of interpreting crop-growth information in real time seriously restrict its application. This paper presents a newly designed UAV-based snapshot multispectral imaging crop-growth sensor (SMICGS) aimed at simplifying the optical structure and realizing online interpretation of crop spectral information. Mosaic filters based on the characteristic spectral features of crops were designed to achieve multiband co-optical imaging. A spectral crosstalk correction method based on the pixel response characteristics of SMICGS was proposed, and a processing system coupling sensor information with crop-growth monitoring models was developed to realize real-time online processing of crop spectral information. Field experiments showed that vegetation indices obtained by SMICGS, combined with the random forest (RF) machine learning algorithm, predicted leaf area index (LAI) and above-ground biomass (AGB) well for wheat and rice. For wheat, the LAI and AGB prediction models achieved R2 values of 0.81 and 0.85 and root mean square error (RMSE) values of 0.682 and 1.127 t/ha, respectively. For rice, the corresponding R2 values were 0.89 and 0.93, with RMSE values of 0.818 and 0.866 t/ha. Overall, SMICGS provides a reliable foundational tool for real-time, non-destructive monitoring of field crop growth, offering significant potential for the precise management of agricultural production.
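The core modeling step here is a regression from SMICGS-derived vegetation indices to LAI or AGB with random forest, scored by R2 and RMSE. Below is a minimal sketch of that workflow in scikit-learn; the file name and index columns (NDVI, NDRE, OSAVI) are illustrative assumptions, not the authors' exact feature set.

```python
# Minimal sketch: random forest regression from vegetation indices to LAI,
# reported with R2 and RMSE as in the abstract. Inputs are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

df = pd.read_csv("wheat_plots.csv")            # hypothetical plot-level table
X = df[["NDVI", "NDRE", "OSAVI"]].values       # vegetation indices from SMICGS
y = df["LAI"].values                           # ground-truth leaf area index

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)

pred = rf.predict(X_test)
print("R2  :", r2_score(y_test, pred))
print("RMSE:", np.sqrt(mean_squared_error(y_test, pred)))
```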

Citations: 0
A scalable and efficient UAV-based pipeline and deep learning framework for phenotyping sorghum panicle morphology from point clouds.
IF 6.4 | CAS Tier 1, Agricultural and Forest Sciences | Q1 AGRONOMY | Pub Date: 2025-05-19 | eCollection Date: 2025-06-01 | DOI: 10.1016/j.plaphe.2025.100050
Chrisbin James, Shekhar S Chandra, Scott C Chapman

Sorghum canopy architecture in field trials is determined by various phenotypic traits, such as plant and panicle counts, leaf density and angle, panicle morphology, and canopy height. Together, these traits affect light capture and biomass production as well as the conversion of photosynthates to grain yield. Panicle morphology exhibits considerable variation as influenced by genetics, environmental conditions, and management practices. This study presents a framework for the 3D reconstruction of sorghum canopies and the phenotyping of panicle morphology. First, we developed a scalable, low-altitude Unmanned Aerial Vehicle (UAV)-based protocol that leverages videos for efficient data acquisition, combined with Neural Radiance Fields (NeRFs) to generate high-quality 3D point cloud reconstructions of sorghum canopies. Next, a 3D model was built to simulate sorghum canopies and create annotated datasets for training deep learning-based semantic segmentation and panicle detection algorithms. Finally, we propose SegVoteNet, a novel multi-task deep learning model that integrates VoteNet and PointNet++ within a shared backbone architecture. Designed for semantic segmentation and 3D detection on pure point cloud data, SegVoteNet incorporates a voting and sampling module that leverages segmentation results to optimize object proposal generation. SegVoteNet is robust, achieving 0.986 Mean Average Precision (mAP) @ 0.5 Intersection Over Union (IOU) on synthetic datasets and 0.850 mAP @ 0.5 IOU on real point cloud datasets for sorghum panicle detection, without fine-tuning. This set of pipelines provides a robust, scalable method for phenotyping sorghum panicles in field trials for breeding and commercial applications. Further work is developing the capability to estimate grain number per panicle, which would provide breeders with additional phenotypes for selection.
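SegVoteNet's headline numbers are mAP at an IoU threshold of 0.5 for 3D panicle boxes. As a reference for how that threshold is applied, the sketch below computes the IoU of two axis-aligned 3D bounding boxes; the (xmin, ymin, zmin, xmax, ymax, zmax) corner format is an assumption, and oriented boxes would need a different computation.

```python
# Sketch: IoU between two axis-aligned 3D boxes, the quantity behind the
# mAP @ 0.5 IOU scores quoted for SegVoteNet. Corner format is assumed.
def iou_3d(a, b):
    dx = min(a[3], b[3]) - max(a[0], b[0])   # overlap along x
    dy = min(a[4], b[4]) - max(a[1], b[1])   # overlap along y
    dz = min(a[5], b[5]) - max(a[2], b[2])   # overlap along z
    if dx <= 0 or dy <= 0 or dz <= 0:
        return 0.0
    inter = dx * dy * dz
    vol_a = (a[3] - a[0]) * (a[4] - a[1]) * (a[5] - a[2])
    vol_b = (b[3] - b[0]) * (b[4] - b[1]) * (b[5] - b[2])
    return inter / (vol_a + vol_b - inter)

# A predicted panicle box counts as a true positive when IoU >= 0.5.
print(round(iou_3d((0, 0, 0, 1, 1, 1), (0.5, 0.5, 0.5, 1.5, 1.5, 1.5)), 3))  # 0.067
```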

Citations: 0
PodNet: Pod real-time instance segmentation in pre-harvest soybean fields.
IF 6.4 | CAS Tier 1, Agricultural and Forest Sciences | Q1 AGRONOMY | Pub Date: 2025-05-19 | eCollection Date: 2025-06-01 | DOI: 10.1016/j.plaphe.2025.100052
Shuo Zhou, Qixin Sun, Ning Zhang, Xiujuan Chai, Tan Sun

Noninvasive analysis of pod phenotypic traits under field conditions is crucial for soybean breeding research. However, previous pod phenotyping studies focused on postharvest materials or were limited to indoor scenarios, failing to generalize to real-field environments. To address these issues, this paper employs an instance segmentation approach for the precise extraction of the pod area from multiplant RGB images in preharvest soybean fields. We first introduce a cost-effective workflow for constructing datasets of densely planted crop images with a uniform backdrop. Starting with video recording, high-quality static frames are collected by automatic selection. Then, a large vision model is explored to facilitate dense annotation and build a large-scale soybean dataset comprising 20k pod masks. Second, the pod instance segmentation model PodNet is developed based on the YOLOv8 architecture. We propose a novel hierarchical prototype aggregation strategy to fuse multiscale semantic features and a U-EMA prototype generation network to improve the model's perception of small objects. Comprehensive experiments suggest that the lightweight PodNet achieves a mean average accuracy of 0.786 on the custom pod segmentation dataset. PodNet also performs competitively on in-field images without a backdrop and enables real-time inference on an edge computing platform. To the best of our knowledge, PodNet is the first pod instance segmentation model for preharvest fields. The low-cost, high-precision extraction of pods is not only a prerequisite for phenotypic analysis of the pod organs but also an important foundation for cross-scale phenotyping from whole-plant to seed levels.
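The dataset workflow begins with video recording followed by automatic selection of high-quality static frames. The abstract does not specify the selection criterion; a common heuristic (assumed here, not necessarily the authors' method) is to keep frames whose Laplacian variance, a simple sharpness score, exceeds a threshold.

```python
# Sketch: keep sharp frames from a field video using the variance of the
# Laplacian as a sharpness score. Generic heuristic, not necessarily the
# exact rule used to build the PodNet dataset.
import cv2

def sharp_frames(video_path, every_n=10, threshold=100.0):
    cap = cv2.VideoCapture(video_path)
    kept, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:                       # subsample the video
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            score = cv2.Laplacian(gray, cv2.CV_64F).var()
            if score > threshold:                    # higher variance -> sharper
                kept.append((idx, score))
        idx += 1
    cap.release()
    return kept

print(sharp_frames("soybean_plot.mp4")[:5])          # hypothetical video file
```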

Citations: 0
XFruitSeg-A general plant fruit segmentation model based on CT imaging.
IF 6.4 | CAS Tier 1, Agricultural and Forest Sciences | Q1 AGRONOMY | Pub Date: 2025-05-15 | eCollection Date: 2025-06-01 | DOI: 10.1016/j.plaphe.2025.100055
Yuwei Lu, Xiaolong Kong, Li Yu, Lejun Yu, Qian Liu

Identification of fruit phenotypes is critical for understanding complex genetic traits. Computed tomography (CT) imaging technology enables the noninvasive acquisition of three-dimensional images of fruit interiors, thus providing a robust data foundation for phenotypic analysis. Accurate segmentation of internal fruit tissues is essential, as it directly influences the accuracy and reliability of the results. Current methods are not optimized for the unique features of plant fruit images. This study introduces XFruitSeg, a general deep learning model for segmenting plant fruit CT images. The model uses a U-shaped encoder-decoder architecture and integrates multitask learning. A large-kernel convolutional network, RepLKNet, expands the receptive field for feature extraction. Multiscale skip connections and a deep supervision mechanism improve the model's capacity to learn features of various sizes, and a contour feature learning branch specifically targets inter-tissue boundaries. An optimized composite loss function enhances the model's robustness on imbalanced categories. Additionally, a dataset named XrayFruitData was established, containing high-resolution images of twelve plant fruit varieties, with accurate annotations of orange, mangosteen, and durian fruits for model evaluation. Compared with four mainstream advanced models, XFruitSeg achieved superior segmentation performance on the orange, mangosteen, and durian datasets, with mean Dice coefficients of 95.21%, 93.24%, and 94.70% and mean intersection over union (mIoU) scores of 91.09%, 87.91%, and 90.35%, respectively. Extensive ablation experiments demonstrate the effectiveness of each component. The proposed XFruitSeg model is therefore beneficial for high-precision analysis of internal fruit phenotypic traits.
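The reported metrics are per-class Dice coefficients and mIoU. For reference, the sketch below computes both from a predicted and a ground-truth label mask; the class layout (0 = background, 1..n = internal tissues) is an assumption.

```python
# Sketch: mean Dice and mIoU between predicted and reference label masks,
# the two metrics reported for XFruitSeg. Class IDs are assumed.
import numpy as np

def dice_and_miou(pred, truth, num_classes):
    dices, ious = [], []
    for c in range(1, num_classes):                  # skip background class 0
        p, t = (pred == c), (truth == c)
        inter = np.logical_and(p, t).sum()
        dices.append(2 * inter / (p.sum() + t.sum() + 1e-9))
        ious.append(inter / (np.logical_or(p, t).sum() + 1e-9))
    return float(np.mean(dices)), float(np.mean(ious))

pred = np.random.randint(0, 3, (128, 128))           # toy predicted mask
truth = np.random.randint(0, 3, (128, 128))          # toy reference mask
print(dice_and_miou(pred, truth, num_classes=3))
```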

Citations: 0
Syndrome "basses richesses" disease induced structural deformations and sectorial distribution of photoassimilates in sugar beet taproot revealed by combined MRI-PET imaging.
IF 6.4 | CAS Tier 1, Agricultural and Forest Sciences | Q1 AGRONOMY | Pub Date: 2025-05-15 | eCollection Date: 2025-06-01 | DOI: 10.1016/j.plaphe.2025.100053
Kwabena Agyei, Justus Detring, Ralf Metzner, Gregor Huber, Daniel Pflugfelder, Omid Eini, Mark Varrelmann, Anne-Katrin Mahlein, Robert Koller

The disease syndrome "basses richesses" (SBR) leads to a significant reduction in sugar beet biomass and sugar content, negatively affecting the sugar economy. Mechanistic understanding of growth and photoassimilate distribution within SBR-diseased sugar beet taproots is currently incomplete. We combined two tomographic methods, magnetic resonance imaging (MRI) and positron emission tomography (PET) using 11C as a tracer, to non-invasively determine SBR effects on structural growth and photoassimilate distribution within the developing taproot over six weeks. MRI analysis revealed a deformed cross-sectional anatomical structure from an early stage, as well as reductions in taproot volume and in the width of inner cambium ring structures of up to 26% and 24%, respectively. These SBR disease effects were also confirmed by post-harvest analysis of the taproot. PET analysis revealed a heterogeneous distribution of labeled photoassimilates in diseased plants: sectors of the taproot with characteristic SBR symptoms showed little to very low 11C tracer signal. The heterogeneity of SBR disease effects is most likely due to partial inoculation of the leaves, which leads to an uneven distribution of the SBR pathogen in the taproot through the strong vascular interconnection between shoot and root. The pathogen must also spread non-uniformly within the taproot to explain the observed marked increase in SBR disease effects over time. Our results indicate that SBR affects photoassimilate sink capacity at an early stage of taproot development. Co-registration of MRI and PET may support early assessment of susceptibility and selection of promising genotype candidates for future breeding programs.

Citations: 0
Boosting leaf trait estimation from reflectance spectra by elucidating the transferability of PLSR models.
IF 6.4 | CAS Tier 1, Agricultural and Forest Sciences | Q1 AGRONOMY | Pub Date: 2025-05-15 | eCollection Date: 2025-06-01 | DOI: 10.1016/j.plaphe.2025.100054
Jiatong Wang, Xiaoqiang Liu, Xiaotian Qi, Xiaoyong Wu, Yilin Long, Yuhao Feng, Qi Dong, Jiabo Yan, Liwen Huang, Yue Luo, Mengqi Cao, Kai Xu, Changming Zhao, Yang Wang, Tianyu Hu, Jin Wu, Lingli Liu, Yanjun Su

Leaf spectroscopy, combined with partial least squares regression (PLSR), is recognized as an efficient and precise tool for measuring plant leaf traits. However, the feasibility of developing a generalizable model remains unclear, primarily due to limited understanding of PLSR model transferability. Here, we collected six key leaf traits along with paired leaf reflectance spectra from 1,967 samples of 349 tree species at eight forest sites across China. Using this dataset, we explored the transferability of PLSR models, factors affecting model transferability, and the feasibility of developing generalizable PLSR models for leaf trait prediction. Overall, PLSR models trained at a specific study site demonstrate limited transferability to other study sites. Dissimilarities in plant evolutionary history and environmental conditions between study sites are the primary factors influencing the transferability of PLSR models. Incorporating training data from diverse evolutionary histories and environmental conditions can improve the transferability of PLSR models, achieving accuracy equivalent to that of site-specific models. Our findings provide guidelines for the use of spectroscopy in leaf trait prediction and underscore the urgent need for collaborative efforts to build an open database of leaf traits and reflectance spectra, thereby promoting the development of universal PLSR models for plant leaf trait prediction.
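The transferability question boils down to fitting a PLSR model on spectra from one site and scoring it on another. A minimal scikit-learn sketch of that comparison is shown below; the file names, the trait column ("leaf_N"), and the number of latent components are illustrative assumptions.

```python
# Sketch: train PLSR on one site's leaf spectra, test within-site and on a
# second site to probe transferability. Inputs and column names are assumed.
import pandas as pd
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

site_a = pd.read_csv("site_A_spectra.csv")   # hypothetical: wavelength columns + "leaf_N"
site_b = pd.read_csv("site_B_spectra.csv")

bands = [c for c in site_a.columns if c != "leaf_N"]
pls = PLSRegression(n_components=15)
pls.fit(site_a[bands].values, site_a["leaf_N"].values)

print("Site A (training site) R2:",
      r2_score(site_a["leaf_N"], pls.predict(site_a[bands].values).ravel()))
print("Site B (transfer)      R2:",
      r2_score(site_b["leaf_N"], pls.predict(site_b[bands].values).ravel()))
```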

Citations: 0
RGB imaging and computer vision-based approaches for identifying spike number loci for wheat.
IF 6.4 | CAS Tier 1, Agricultural and Forest Sciences | Q1 AGRONOMY | Pub Date: 2025-05-13 | eCollection Date: 2025-06-01 | DOI: 10.1016/j.plaphe.2025.100051
Lei Li, Muhammad Adeel Hassan, Duoxia Wang, Guoliang Wan, Sahila Beegum, Awais Rasheed, Xianchun Xia, Yong He, Yong Zhang, Zhonghu He, Jindong Liu, Yonggui Xiao

The spike number (SN) is an important trait that significantly impacts grain yield in wheat. Manual counting of SN is time-consuming, hindering large-scale breeding efforts. Hence, there is an urgent need to develop efficient and accurate methodologies for SN counting. A YOLOX algorithm was used to determine the optimal growth stage for developing wheat spike detection models using recombinant inbred lines (RILs) from the Zhongmai 175 × Lunxuan 987 cross and a diverse panel of 166 cultivars. We subsequently increased the precision of spike identification by developing a new YOLOX-P algorithm that incorporates the convolutional block attention module and by increasing the resolution of the input images. We also used these SN data to identify underlying loci in the Zhongmai 578 × Jimai 22 RIL population. The results revealed that models built at the late grain-filling stage achieved the highest precision among the SN detection models, with accuracies ranging from 91.8% to 95.02%. The improved YOLOX-P algorithm demonstrated mean average precision scores 5.30-5.99% higher and F1 scores 0.06 higher than the YOLOX algorithm when applied to the same subsets. Three new SN loci, namely QSN.caas-4A2, QSN.caas-4D, and QSN.caas-5B2, were identified using the 50k SNP arrays. Two kompetitive allele-specific PCR markers linked with QSN.caas-4A2 and QSN.caas-5B2 were developed, and their genetic effects were validated in a diverse panel of 166 cultivars. These findings provide useful tools for high-throughput determination of SN and for identifying novel loci in wheat.
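YOLOX-P's key change is the addition of a convolutional block attention module (CBAM) together with higher-resolution inputs. The sketch below is a generic, textbook-style CBAM in PyTorch to illustrate what such a module computes; it is not the authors' exact implementation or placement within YOLOX, and the hyperparameters are assumed.

```python
# Sketch: a minimal CBAM (channel + spatial attention) of the kind added to
# YOLOX to form YOLOX-P. Generic implementation, assumed hyperparameters.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        # channel attention: shared MLP over average- and max-pooled descriptors
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # spatial attention: conv over channel-wise average and max maps
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        ca = torch.sigmoid(self.mlp(x.mean((2, 3), keepdim=True)) +
                           self.mlp(x.amax((2, 3), keepdim=True)))
        x = x * ca                                   # reweight channels
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
        return x * sa                                # reweight spatial positions

feat = torch.randn(1, 256, 40, 40)                   # a feature map from the detector
print(CBAM(256)(feat).shape)                         # torch.Size([1, 256, 40, 40])
```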

Citations: 0
Monitoring and Risk Prediction of Low-Temperature Stress in Strawberries through Fusion of Multisource Phenotypic Spatial Variability Features.
IF 6.4 | CAS Tier 1, Agricultural and Forest Sciences | Q1 AGRONOMY | Pub Date: 2025-05-06 | eCollection Date: 2025-06-01 | DOI: 10.1016/j.plaphe.2025.100041
Nan Jiang, Zaiqiang Yang, Hanqi Zhang, Chengjing Zhang, Canyue Wang, Na Wang, Chao Xu

Capturing crop physiological information by phenotyping is a key trend in smart agriculture. However, current studies underutilize the spatial structural information in phenotypic imaging. To evaluate the feasibility of monitoring crop cold stress on the basis of phenotypic spatial variability, we conducted controlled experiments on 'Toyonoka' strawberry plants under four dynamic cooling gradients and three stress durations and analyzed the dependence of their photosynthetic physiology and phenotypic traits on temperature-time interactions. The results revealed that NPQ/1D-Parallel/TENT, Y(NO)/2D-Region/INEM, and qP/1D-Parallel/TENT showed the highest mutual information with the maximum net photosynthetic rate (Pmax), relative electrolyte conductivity (REC), and total chlorophyll content (Chl a+b), respectively. The difference between the Photosynthetic Physiological Potential Index (PPPI) and the relative negative accumulated temperature (RNAT)/650 was used to effectively calculate the cold damage risk (CDRI). An XGBoost-based model integrating the PPPI and RNAT outperformed AdaBoost and RandomForest, achieving an R2 of 0.98, an RMSE of 0.337, a classification accuracy of 92.13%, and a Kappa coefficient of 0.904. qP/1D-Parallel/TENT contributed the most to the model. This study provides a scientific basis for phenotypic information mining and agro-meteorological disaster monitoring.
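Feature screening in the abstract is by mutual information between image-derived features and physiological targets such as Pmax. A minimal scikit-learn sketch of that ranking step is shown below; the file and column names are illustrative assumptions.

```python
# Sketch: rank phenotypic image features by mutual information with Pmax,
# mirroring the feature-screening step in the abstract. Inputs are assumed.
import pandas as pd
from sklearn.feature_selection import mutual_info_regression

df = pd.read_csv("strawberry_features.csv")          # hypothetical feature table
target = df["Pmax"]                                  # maximum net photosynthetic rate
features = df.drop(columns=["Pmax"])

mi = mutual_info_regression(features.values, target.values, random_state=0)
ranking = pd.Series(mi, index=features.columns).sort_values(ascending=False)
print(ranking.head(5))                               # top features by mutual information
```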

Citations: 0
Analysis of variance and its sources in UAV-based multi-view thermal imaging of wheat plots.
IF 6.4 | CAS Tier 1, Agricultural and Forest Sciences | Q1 AGRONOMY | Pub Date: 2025-04-30 | eCollection Date: 2025-06-01 | DOI: 10.1016/j.plaphe.2025.100046
Simon Treier, Lukas Roth, Andreas Hund, Helge Aasen, Lilia Levy Häner, Nicolas Vuille-Dit-Bille, Achim Walter, Juan M Herrera

Canopy temperature (CT) estimates from drone-based uncooled thermal cameras are prone to confounding effects, which affects their interpretability. Experimental sources of variance, such as genotypes and experimental treatments, blend with confounding sources of variance such as thermal drift, spatial field trends, and effects related to viewing geometry. Nevertheless, CT is gaining popularity for characterizing crop performance and crop water use and as a proxy for stomatal conductance and transpiration. Drone-based thermography was therefore proposed to measure CT in agricultural experiments. For a meaningful interpretation of CT, confounding sources of variance must be considered. In this study, the multi-view approach was applied to examine the variance components of CT across 99 flights with a drone-based thermal camera. Flights were conducted over two variety-testing field trials of winter wheat across two years with contrasting meteorological conditions in the temperate climate of Switzerland. It was demonstrated how experimental sources of variance can be disentangled from confounding sources, and on average more than 96.5% of the initial variance could be explained by experimental and confounding sources combined. Not considering confounding sources led to erroneous conclusions about phenotypic correlations of CT with traits such as yield, plant height, fractional canopy cover, and multispectral indices. Based on extensive and diverse data, this study provides comprehensive insights into the manifold sources of variance in CT measurements, which supports the planning and interpretation of drone-based CT screenings in variety testing, breeding, and research.
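The study's central idea is partitioning CT variance into experimental and confounding sources. One simple way to approximate such a decomposition (an assumption here, not the authors' exact modeling machinery) is an ANOVA over plot-level CT observations, reporting each source's share of the total sum of squares.

```python
# Sketch: crude variance decomposition of canopy temperature (CT) into
# experimental (genotype) and confounding (flight, view zone, row/column)
# sources via ANOVA. Factor and file names are illustrative assumptions.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("ct_observations.csv")   # hypothetical: one row per plot-view observation
model = ols("ct ~ C(genotype) + C(flight) + C(view_zone) + C(row) + C(col)",
            data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)

share = anova["sum_sq"] / anova["sum_sq"].sum()
print(share.sort_values(ascending=False))  # fraction of total CT variation per source
```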

Citations: 0
PhenoGazer: A high-throughput phenotyping system to track plant stress responses using hyperspectral reflectance, nighttime chlorophyll fluorescence and RGB imaging in controlled environments.
IF 6.4 | CAS Tier 1, Agricultural and Forest Sciences | Q1 AGRONOMY | Pub Date: 2025-04-28 | eCollection Date: 2025-06-01 | DOI: 10.1016/j.plaphe.2025.100047
Muhammad Adeel Hassan, Christine Yao-Yun Chang

High-throughput phenotyping for crop monitoring at both leaf and canopy scales is essential for understanding plant responses to various stresses. PhenoGazer, a high-throughput phenotyping system, enhances crop monitoring in controlled environments by integrating a portable hyperspectral spectrometer with eight fiber optics, four Raspberry Pi cameras, and blue LED lights. This system allows for comprehensive assessment of plant health and development. PhenoGazer features automated, movable upper and lower racks for continuous measurements. The lower rack, equipped with four blue LED lights and spectrometer fiber optics, captures blue light-induced chlorophyll fluorescence at night. The upper rack, carrying four spectrometer fiber optics and cameras, captures hyperspectral reflectance and RGB images during the day. This dual capability enables detailed evaluation of plant phenology, stress responses, and growth dynamics throughout the entire crop growth cycle. Fully automated and managed by a Raspberry Pi running Python scripts, PhenoGazer ensures precise control and data acquisition with minimal human intervention. Additionally, it includes continuous measurements through a datalogger to acquire photosynthetically active radiation (PAR), soil moisture, and temperature, and can be expanded with additional analog or digital sensors as desired by end users. To test the system, soybean plants representing three conditions (healthy and well watered, healthy and droughted, and diseased) were monitored to evaluate growth and stress responses. PhenoGazer successfully phenotyped plants under different conditions in a walk-in growth chamber. By combining nighttime blue light-induced chlorophyll fluorescence, hyperspectral reflectance-based vegetation indices, and RGB imagery, PhenoGazer represents a significant advancement in plant phenotyping technology, enhancing our understanding of crop responses to environmental conditions and supporting optimized crop performance in research and agricultural applications.
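PhenoGazer's control logic alternates between daytime reflectance/RGB acquisition and nighttime blue-LED fluorescence, driven by Python scripts on a Raspberry Pi. The loop below is only a schematic of that scheduling idea; the capture functions, photoperiod, and measurement interval are placeholders, and the real system also moves racks and logs environmental sensor data.

```python
# Schematic only: day/night measurement scheduling in the spirit of the
# PhenoGazer control scripts. All hardware calls are hypothetical stubs.
import time
from datetime import datetime

def capture_rgb():          print("RGB image captured")                          # stub
def capture_spectra():      print("hyperspectral reflectance captured")          # stub
def capture_fluorescence(): print("blue-LED chlorophyll fluorescence captured")  # stub

DAY_START, DAY_END = 8, 20            # assumed photoperiod (hours of day)

while True:
    hour = datetime.now().hour
    if DAY_START <= hour < DAY_END:   # daytime: upper-rack measurements
        capture_rgb()
        capture_spectra()
    else:                             # nighttime: lower rack, blue LEDs on
        capture_fluorescence()
    time.sleep(3600)                  # one measurement cycle per hour (assumed)
```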

Citations: 0