
Latest publications in Plant Phenomics

LenRuler: a rice-centric method for automated radicle length measurement with multicrop validation.
IF 6.4, CAS Tier 1 (Agricultural & Forestry Sciences), Q1 AGRONOMY. Pub Date: 2025-09-08; eCollection Date: 2025-09-01. DOI: 10.1016/j.plaphe.2025.100103
Jinfeng Zhao, Zeyu Hou, Hua Hua, Qianlong Nie, Yuqian Pang, Yan Ma, Xuehui Huang

Radicle length is a critical indicator of seed vigor, germination capacity, and seedling growth potential. However, existing measurement methods face challenges in automation, efficiency, and generalizability, often requiring manual intervention or re-annotation for different seed types. To address these limitations, this paper proposes an automated method, LenRuler, with a primary focus on rice seeds and validation in multiple crops. The method leverages the Segment Anything Model (SAM) as the foundational segmentation model and employs a coarse-to-fine segmentation strategy combined with Gaussian-based classification to automatically generate bounding boxes and centroids, which are then fed into SAM for precise segmentation of the seed coat and radicle. The radicle length is subsequently computed by converting the geodesic distance between the radicle skeleton's farthest endpoint and its nearest intersection with the seed coat skeleton into the true length. Experiments on the Riceseed1 dataset show that the proposed method achieves a Dice coefficient of 0.955 and a Pixel Accuracy of 0.944, demonstrating excellent segmentation performance. Radicle length measurement experiments on the Riceseed2 test set show that the Mean Absolute Error (MAE) was 0.273 mm and the coefficient of determination (R²) was 0.982, confirming the method's high precision for rice. On the Otherseed dataset, the predicted radicle lengths for maize (Zea mays), pearl millet (Pennisetum glaucum), and rye (Secale cereale) are consistent with the observed radicle length distributions, demonstrating strong cross-species performance. These results establish LenRuler as an accurate and automated solution for radicle length measurement in rice, with validated applicability to other crop species.
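The geodesic-distance step described in this abstract (measuring along the radicle skeleton from its farthest endpoint to the intersection with the seed-coat skeleton) can be sketched in a few lines. This is an illustrative re-implementation, not the authors' code: it assumes an 8-connected binary skeleton given as a set of (row, col) pixels and a known mm-per-pixel scale. On a simple unbranched path a breadth-first traversal with √2-weighted diagonal steps suffices; a branched skeleton would call for Dijkstra's algorithm instead.

```python
from collections import deque

def geodesic_length_mm(skeleton, start, end, mm_per_px=0.1):
    """Walk a binary skeleton (a set of (row, col) pixels) from `start`
    to `end` and return the geodesic path length in millimetres.
    Axis-aligned steps count 1 pixel, diagonal steps count sqrt(2)."""
    if start not in skeleton or end not in skeleton:
        raise ValueError("endpoints must lie on the skeleton")
    dist = {start: 0.0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == end:
            return dist[(r, c)] * mm_per_px
        # Explore the 8-connected neighbourhood of the current pixel.
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nxt = (r + dr, c + dc)
                if nxt in skeleton and nxt not in dist:
                    step = 2 ** 0.5 if dr and dc else 1.0
                    dist[nxt] = dist[(r, c)] + step
                    queue.append(nxt)
    raise ValueError("end point not reachable from start")
```

For example, a straight 11-pixel horizontal skeleton at 0.1 mm/px yields a length of 1.0 mm.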

{"title":"LenRuler: a rice-centric method for automated radicle length measurement with multicrop validation.","authors":"Jinfeng Zhao, Zeyu Hou, Hua Hua, Qianlong Nie, Yuqian Pang, Yan Ma, Xuehui Huang","doi":"10.1016/j.plaphe.2025.100103","DOIUrl":"10.1016/j.plaphe.2025.100103","url":null,"abstract":"<p><p>Radicle length is a critical indicator of seed vigor, germination capacity, and seedling growth potential. However, existing measurement methods face challenges in automation, efficiency, and generalizability, often requiring manual intervention or re-annotation for different seed types. To address these limitations, this paper proposes an automated method, LenRuler, with a primary focus on rice seeds and validation in multiple crops. The method leverages the Segment Anything Model (SAM) as the foundational segmentation model and employs a coarse-to-fine segmentation strategy combined with Gaussian-based classification to automatically generate bounding boxes and centroids, which are then fed into SAM for precise segmentation of the seed coat and radicle. The radicle length is subsequently computed by converting the geodesic distance between the radicle skeleton's farthest endpoint and its nearest intersection with the seed coat skeleton into the true length. Experiments on the Riceseed1 dataset show that the proposed method achieves a Dice coefficient of 0.955 and a Pixel Accuracy of 0.944, demonstrating excellent segmentation performance. Radicle length measurement experiments on the Riceseed2 test set show that the Mean Absolute Error (MAE) was 0.273 ​mm and the coefficient of determination (R<sup>2</sup>) was 0.982, confirming the method's high precision for rice. On the Otherseed dataset, the predicted radicle lengths for maize (<i>Zea mays</i>), pearl millet (<i>Pennisetum glaucum</i>), and rye (<i>Secale cereale</i>) are consistent with the observed radicle length distributions, demonstrating strong cross-species performance. 
These results establish LenRuler as an accurate and automated solution for radicle length measurement in rice, with validated applicability to other crop species.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 3","pages":"100103"},"PeriodicalIF":6.4,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12710053/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145782257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Exploring the depth of the maize canopy LAI detected by spectroscopy based on simulations and in situ measurements.
IF 6.4, CAS Tier 1 (Agricultural & Forestry Sciences), Q1 AGRONOMY. Pub Date: 2025-09-07; eCollection Date: 2025-09-01. DOI: 10.1016/j.plaphe.2025.100100
Jinpeng Cheng, Jiao Wang, Dan Zhao, Fenghui Duan, Qiang Wu, Yongliang Lai, Jianbo Qi, Shuping Xiong, Hongbo Qiao, Xinming Ma, Hao Yang, Guijun Yang

The vertical distribution of leaves plays a crucial role in the growth process of maize. Understanding the vertical spectral characteristics of maize leaves is crucial for monitoring their growth. However, accurate estimation of the vertical distribution of leaf area remains a significant challenge in practical investigations. To address this, we used a three-dimensional radiative transfer model (3D RTM) to simulate the layered canopy spectra of maize, revealing the impact of canopy structure on remote sensing penetration depth across different growth stages and planting densities. The results of this study revealed differences in detection depth across growth stages. During the early growth stage, the depth was concentrated in the bottom 1 to 3 leaves of the canopy, reaching 1 to 4 leaves at the ear stage and 1 to 7 leaves during the grain-filling stage. The planting density had a notable effect on the detection depth at the bottom of the canopy. Moreover, compared with the other spectral bands, the near-infrared spectral range exhibited greater sensitivity to density variations. In terms of LAI inversion, a FuseBell-Hybrid model was constructed. We analyzed VIs across different planting densities and canopy structural scenarios and found that compared with lower layers, increased density reduced the relative change rate in the upper leaf layers. The sensitivity patterns differed between plant architectures: VIred exhibited density-dependent sensitivity, with distinct responses between plant types, and MTVI2 demonstrated optimal performance for mid-canopy monitoring. This study highlights the influence of the heterogeneous structural characteristics of maize canopies on remote sensing detection depth during different phenological stages, providing theoretical support for enhancing multilayer crop monitoring in precision agriculture.
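MTVI2, which the abstract singles out for mid-canopy monitoring, is a closed-form vegetation index; a small sketch using the standard Haboudane et al. (2004) formulation over green (~550 nm), red (~670 nm), and NIR (~800 nm) reflectance. This is the commonly cited definition, not necessarily the exact band set used in the study:

```python
from math import sqrt

def mtvi2(green, red, nir):
    """Modified Triangular Vegetation Index 2 from green, red, and
    near-infrared reflectance values (each in the range 0..1)."""
    num = 1.5 * (1.2 * (nir - green) - 2.5 * (red - green))
    den = sqrt((2 * nir + 1) ** 2 - (6 * nir - 5 * sqrt(red)) - 0.5)
    return num / den
```

A dense green canopy (high NIR, low red) scores higher than a sparse one, which is what makes the index useful as a density-sensitive signal.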

{"title":"Exploring the depth of the maize canopy LAI detected by spectroscopy based on simulations and in situ measurements.","authors":"Jinpeng Cheng, Jiao Wang, Dan Zhao, Fenghui Duan, Qiang Wu, Yongliang Lai, Jianbo Qi, Shuping Xiong, Hongbo Qiao, Xinming Ma, Hao Yang, Guijun Yang","doi":"10.1016/j.plaphe.2025.100100","DOIUrl":"10.1016/j.plaphe.2025.100100","url":null,"abstract":"<p><p>The vertical distribution of leaves plays a crucial role in the growth process of maize. Understanding the vertical spectral characteristics of maize leaves is crucial for monitoring their growth. However, accurate estimation of the vertical distribution of leaf area remains a significant challenge in practical investigations. To address this, we used a 3D RTM to simulate the layered canopy spectra of maize, revealing the impact of canopy structure on remote sensing penetration depth across different growth stages and planting densities. The results of this study revealed differences in detection depth across growth stages. During the early growth stage, the depth was concentrated in the bottom 1 to 3 leaves of the canopy, reaching 1 to 4 leaves at the ear stage and 1 to 7 leaves during the grain-filling stage. The planting density had a notable effect on the detection depth at the bottom of the canopy. Moreover, compared with the other spectral bands, the near-infrared spectral range exhibited greater sensitivity to density variations. In terms of LAI inversion, a FuseBell-Hybrid model was constructed. We analyzed VIs across different planting density and canopy structural scenarios and found that compared with lower layers, increased density reduced the relative change rate in the upper leaf layers. The sensitivity patterns differed between plant architectures: VIred exhibited density-dependent sensitivity, with distinct responses between plant types, and MTVI2 demonstrated optimal performance for mid-canopy monitoring. 
This study highlights the influence of the heterogeneous structural characteristics of maize canopies on remote sensing detection depth during different phenological stages, providing theoretical support for enhancing multilayer crop monitoring in precision agriculture.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 3","pages":"100100"},"PeriodicalIF":6.4,"publicationDate":"2025-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12709897/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145782537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Development of an automated phenotyping platform and identification of a novel QTL for drought tolerance in soybean.
IF 6.4, CAS Tier 1 (Agricultural & Forestry Sciences), Q1 AGRONOMY. Pub Date: 2025-09-07; eCollection Date: 2025-09-01. DOI: 10.1016/j.plaphe.2025.100102
Hakyung Kwon, Suk-Ha Lee, Moon Young Kim, Jungmin Ha

A deep understanding of slow-wilting is essential for developing drought-tolerant crops. Existing approaches to measuring transpiration rates are difficult to apply to large populations due to their high cost and low throughput. To overcome these challenges, we developed a high-throughput phenotyping system that integrates a load cell sensor and an Arduino-based microcontroller device. The system tracked the transpiration rate in real time by measuring changes in pot weight in 224 recombinant inbred lines of Taekwangkong (fast-wilting) × SS2-2 (slow-wilting) under water-restricted conditions. Among the five transpiration features we determined, the stress recognition time point (SRTP) and the decrease in transpiration rate by stress (DTrs) are informative parameters that are interconnected yet also affect slow-wilting independently. Quantitative trait loci (QTL) for SRTP and DTrs were identified at the same location as the major QTL for slow wilting, qSW_Gm10, identified in a previous study. Notably, we found a novel major QTL for DTrs, qDTrs_Gm04, with a LOD value of 42 and a PVE of 47 %. As a candidate gene for qDTrs_Gm04, GmWRKY58 was selected on the basis of differential expression between the parental lines under drought conditions as well as upstream sequence variation. Our high-throughput system is of use not only in biological research but also in breeding programs for drought-tolerant lines.
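The load-cell pipeline lends itself to a simple sketch: transpiration rate as pot-weight loss per reading interval, SRTP as the first interval where the rate falls below a fraction of its baseline, and DTrs as the relative drop from that baseline. The threshold and baseline window here are hypothetical illustration choices; the paper's exact feature definitions may differ.

```python
def transpiration_rates(weights):
    """Transpiration rate per interval, taken as the pot-weight loss (g)
    between consecutive load-cell readings."""
    return [w0 - w1 for w0, w1 in zip(weights, weights[1:])]

def srtp_and_dtrs(weights, drop_fraction=0.5, baseline_n=3):
    """SRTP: index of the first interval whose rate falls below
    `drop_fraction` of the baseline (mean of the first `baseline_n`
    intervals), or None if no drop occurs.
    DTrs: relative rate decrease from the baseline to the last interval."""
    rates = transpiration_rates(weights)
    baseline = sum(rates[:baseline_n]) / baseline_n
    srtp = next((i for i, r in enumerate(rates) if r < drop_fraction * baseline), None)
    dtrs = (baseline - rates[-1]) / baseline
    return srtp, dtrs
```

With a weight series that loses 10 g per interval and then slows to 1 g, the stress is recognized at the first slowed interval and DTrs is 0.9.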

{"title":"Development of an automated phenotyping platform and identification of a novel QTL for drought tolerance in soybean.","authors":"Hakyung Kwon, Suk-Ha Lee, Moon Young Kim, Jungmin Ha","doi":"10.1016/j.plaphe.2025.100102","DOIUrl":"10.1016/j.plaphe.2025.100102","url":null,"abstract":"<p><p>Deep understanding of slow-wilting is essential for developing drought-tolerant crops. Existing approaches to measure transpiration rates are difficult to apply to large populations due to their high cost and low throughput. To overcome these challenges, we developed a high-throughput phenotyping system that integrates a load cell sensor and an Arduino-based microcontroller device. The system tracked the transpiration rate in real time by measuring changes in the pot weight in 224 recombinant inbred lines of Taekwangkong (fast-wilting) x SS2-2 (slow-wilting) under water-restricted conditions. Among five transpiration features we determined, stress recognition time point (SRTP) and decrease in transpiration rate by stress (DTrs) are informative parameters, that are interconnected and independently affect slow-wilting as well. Quantitative trait loci (QTL) for SRTP and DTrs were identified at the same location as the major QTL for slow wilting, <i>qSW_Gm10</i>, identified in the previous study. Notably, we found a novel major QTL for DTrs, <i>qDTrs_Gm04</i>, with a LOD value of 42 and PVE of 47 ​%. As a candidate gene for <i>qDTrs_Gm04</i>, <i>GmWRKY58</i> was selected with differential expression between the parental lines under drought conditions as well as upstream sequence variation. 
Our high-throughput system is of help not only to biological research but breeding programs of drought-tolerant lines.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 3","pages":"100102"},"PeriodicalIF":6.4,"publicationDate":"2025-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12709953/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145782489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Precise Image Color Correction Based on Dual Unmanned Aerial Vehicle Cooperative Flight.
IF 6.4, CAS Tier 1 (Agricultural & Forestry Sciences), Q1 AGRONOMY. Pub Date: 2025-09-05; eCollection Date: 2025-09-01. DOI: 10.1016/j.plaphe.2025.100101
Xuqi Lu, Jiayang Xie, Jiayou Yan, Ji Zhou, Haiyan Cen

Color accuracy and consistency in remote sensing imagery are crucial for reliable plant health monitoring, precise growth stage identification, and stress detection. However, without effective color correction, variations in lighting and sensor sensitivity often cause color distortions between images, compromising data quality and analysis. This study introduces a novel in-flight color correction approach for RGB imagery using cooperative dual unmanned aerial vehicle (UAV) flights integrated with a color chart (CoF-CC). The method employs a master UAV equipped with an RGB camera for image acquisition and a synchronized secondary UAV carrying a ColorChecker (X-Rite) chart, ensuring persistent visibility of the chart within the imaging field of the master UAV for the calculation of a color correction matrix (CCM) for in-flight image correction. Field experiments validated the method by analyzing cross-sensor color consistency, assessing color measurement accuracy on field-grown rice leaves, and demonstrating its practical applications using rice maturity estimation as an example. The results indicated that the CCM significantly enhanced color accuracy, with a 66.1 % reduction in the average CIE 2000 color difference (ΔE), and improved color consistency among the six RGB sensors, with a 70.2 % increase in the intracluster distance. CoF-CC subsequently reduced ΔE from 18.2 to 5.0 between the corrected rice leaf color and ground-truth measurements, indicating that the color differences were nearly imperceptible to the human eye. Moreover, the corrected imagery significantly enhanced the rice maturity prediction accuracy, improving the R² from 0.28 to 0.67. In summary, the CoF-CC method standardizes RGB images across diverse lighting conditions and sensors, demonstrating robust performance in color analysis and interpretation under open-field conditions.
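The colour-correction-matrix step can be illustrated with a plain least-squares fit: given RGB values measured on the ColorChecker patches and their reference values, solve the normal equations for a 3×3 matrix M such that measured·M ≈ reference. This is a simplified sketch (no offset term, and no CIEDE2000 ΔE computation), not the paper's implementation:

```python
def solve3(a, b):
    """Solve a 3x3 linear system a.x = b by Gaussian elimination with
    partial pivoting; `a` is a list of rows, `b` a list of 3 values."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]  # augmented matrix
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):  # back substitution
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x

def fit_ccm(measured, reference):
    """Least-squares 3x3 colour-correction matrix M with measured.M ~
    reference (rows are per-patch RGB triples), via the normal
    equations (AtA)M = AtB, solved one output channel at a time."""
    ata = [[sum(m[i] * m[j] for m in measured) for j in range(3)] for i in range(3)]
    atb = [[sum(m[i] * r[k] for m, r in zip(measured, reference)) for k in range(3)]
           for i in range(3)]
    cols = [solve3(ata, [atb[i][k] for i in range(3)]) for k in range(3)]
    return [[cols[k][i] for k in range(3)] for i in range(3)]

def apply_ccm(rgb, ccm):
    """Correct one RGB triple with the fitted matrix."""
    return [sum(rgb[i] * ccm[i][k] for i in range(3)) for k in range(3)]
```

With synthetic patches generated from a known matrix, the fit recovers that matrix to within floating-point error, which is a convenient self-check before applying the method to real chart captures.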

{"title":"Precise Image Color Correction Based on Dual Unmanned Aerial Vehicle Cooperative Flight.","authors":"Xuqi Lu, Jiayang Xie, Jiayou Yan, Ji Zhou, Haiyan Cen","doi":"10.1016/j.plaphe.2025.100101","DOIUrl":"10.1016/j.plaphe.2025.100101","url":null,"abstract":"<p><p>Color accuracy and consistency in remote sensing imagery are crucial for reliable plant health monitoring, precise growth stage identification, and stress detection. However, without effective color correction, variations in lighting and sensor sensitivity often cause color distortions between images, compromising data quality and analysis. This study introduces a novel in-flight color correction approach for RGB imagery using cooperative dual unmanned aerial vehicle (UAV) flights integrated with a color chart (CoF-CC). The method employs a master UAV equipped with an RGB camera for image acquisition and a synchronized secondary UAV carrying a ColorChecker (X-Rite) chart, ensuring persistent visibility of the chart within the imaging field of the master UAV for the calculation of a color correction matrix (CCM) for in-flight image correction. Field experiments validated the method by analyzing cross-sensor color consistency, assessing color measurement accuracy on field-grown rice leaves, and demonstrating its practical applications using rice maturity estimation as an example. The results indicated that the CCM significantly enhanced color accuracy, with a 66.1 ​% reduction in the average CIE 2000 color difference (ΔE), and improved color consistency among the six RGB sensors, with a 70.2 ​% increase in the intracluster distance. CoF-CC subsequently reduced ΔE from 18.2 to 5.0 between the corrected rice leaf color and ground-truth measurements, indicating that the color differences were nearly perceptible to the human eye. Moreover, the corrected imagery significantly enhanced the rice maturity prediction accuracy, improving the R<sup>2</sup> from 0.28 to 0.67. 
In summary, the CoF-CC method standardizes RGB images across diverse lighting conditions and sensors, demonstrating robust performance in color analysis and interpretation under open-field conditions.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 3","pages":"100101"},"PeriodicalIF":6.4,"publicationDate":"2025-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12710031/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145782357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Global rice multiclass segmentation dataset (RiceSEG): comprehensive and diverse high-resolution RGB-annotated images for the development and benchmarking of rice segmentation algorithms.
IF 6.4, CAS Tier 1 (Agricultural & Forestry Sciences), Q1 AGRONOMY. Pub Date: 2025-09-04; eCollection Date: 2025-09-01. DOI: 10.1016/j.plaphe.2025.100099
Junchi Zhou, Haozhou Wang, Yoichiro Kato, Tejasri Nampally, P Rajalakshmi, M Balram, Keisuke Katsura, Hao Lu, Yue Mu, Wanneng Yang, Yangmingrui Gao, Feng Xiao, Hongtao Chen, Yuhao Chen, Wenjuan Li, Jingwen Wang, Fenghua Yu, Jian Zhou, Wensheng Wang, Xiaochun Hu, Yuanzhu Yang, Yanfeng Ding, Wei Guo, Shouyang Liu

The development of computer vision-based rice phenotyping techniques is crucial for precision field management and accelerated breeding, both of which support continuing advances in rice production. Among phenotyping tasks, distinguishing image components is a key prerequisite for characterizing plant growth and development at the organ scale, enabling deeper insights into ecophysiological processes. However, owing to the fine structure of rice organs and complex illumination within the canopy, this task remains highly challenging, underscoring the need for a high-quality training dataset. Such datasets are scarce, both because of a lack of large, representative collections of rice field images and because of the time-intensive nature of the annotation. To address this gap, we created the first comprehensive multiclass rice semantic segmentation dataset, RiceSEG. We gathered nearly 50,000 high-resolution, ground-based images from five major rice-growing countries (China, Japan, India, the Philippines, and Tanzania), encompassing more than 6000 genotypes across all growth stages. From these original images, 3078 representative samples were selected and annotated with six classes (background, green vegetation, senescent vegetation, panicle, weeds, and duckweed) to form the RiceSEG dataset. Notably, the subdataset from China spans all major genotypes and rice-growing environments from northeastern to southern regions. Both state-of-the-art convolutional neural networks and transformer-based semantic segmentation models were used as baselines. While these models perform reasonably well in segmenting background and green vegetation, they face difficulties during the reproductive stage, when canopy structures are more complex and when multiple classes are involved. These findings highlight the importance of our dataset for developing specialized segmentation models for rice and other crops. The RiceSEG dataset is publicly available at www.global-rice.com.
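A first sanity check on a multiclass annotation set like this is the per-class pixel frequency, which immediately exposes class imbalance (panicles and duckweed are typically far rarer than background). A minimal sketch over the six RiceSEG classes; the integer id order here is an assumption for illustration:

```python
from collections import Counter

# Hypothetical id order; the dataset's actual label encoding may differ.
CLASSES = ["background", "green vegetation", "senescent vegetation",
           "panicle", "weeds", "duckweed"]

def class_frequencies(label_mask):
    """Per-class pixel fraction for one annotated mask, given as a 2-D
    list of integer class ids in 0..5."""
    counts = Counter(px for row in label_mask for px in row)
    total = sum(counts.values())
    return {CLASSES[i]: counts.get(i, 0) / total for i in range(len(CLASSES))}
```

Aggregating these fractions over the whole dataset is the usual basis for class-weighted losses when rare classes such as panicle drag down baseline accuracy.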

{"title":"Global rice multiclass segmentation dataset (RiceSEG): comprehensive and diverse high-resolution RGB-annotated images for the development and benchmarking of rice segmentation algorithms.","authors":"Junchi Zhou, Haozhou Wang, Yoichiro Kato, Tejasri Nampally, P Rajalakshmi, M Balram, Keisuke Katsura, Hao Lu, Yue Mu, Wanneng Yang, Yangmingrui Gao, Feng Xiao, Hongtao Chen, Yuhao Chen, Wenjuan Li, Jingwen Wang, Fenghua Yu, Jian Zhou, Wensheng Wang, Xiaochun Hu, Yuanzhu Yang, Yanfeng Ding, Wei Guo, Shouyang Liu","doi":"10.1016/j.plaphe.2025.100099","DOIUrl":"10.1016/j.plaphe.2025.100099","url":null,"abstract":"<p><p>The development of computer vision-based rice phenotyping techniques is crucial for precision field management and accelerated breeding, which facilitate continuously advancing rice production. Among phenotyping tasks, distinguishing image components is a key prerequisite for characterizing plant growth and development at the organ scale, enabling deeper insights into ecophysiological processes. However, owing to the fine structure of rice organs and complex illumination within the canopy, this task remains highly challenging, underscoring the need for a high-quality training dataset. Such datasets are scarce, both because of a lack of large, representative collections of rice field images and because of the time-intensive nature of the annotation. To address this gap, we created the first comprehensive multiclass rice semantic segmentation dataset, RiceSEG. We gathered nearly 50,000 high-resolution, ground-based images from five major rice-growing countries (China, Japan, India, the Philippines, and Tanzania), encompassing more than 6000 genotypes across all growth stages. From these original images, 3078 representative samples were selected and annotated with six classes (background, green vegetation, senescent vegetation, panicle, weeds, and duckweed) to form the RiceSEG dataset. 
Notably, the subdataset from China spans all major genotypes and rice-growing environments from northeastern to southern regions. Both state-of-the-art convolutional neural networks and transformer-based semantic segmentation models were used as baselines. While these models perform reasonably well in segmenting background and green vegetation, they face difficulties during the reproductive stage, when canopy structures are more complex and when multiple classes are involved. These findings highlight the importance of our dataset for developing specialized segmentation models for rice and other crops. The RiceSEG dataset is publicly available at www.global-rice.com.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 3","pages":"100099"},"PeriodicalIF":6.4,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12710049/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145781897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
TM-WSNet: A precise segmentation method for individual rubber trees based on UAV LiDAR point cloud.
IF 6.4, CAS Tier 1 (Agricultural & Forestry Sciences), Q1 AGRONOMY. Pub Date: 2025-08-21; eCollection Date: 2025-09-01. DOI: 10.1016/j.plaphe.2025.100093
Lele Yan, Guoxiong Zhou, Miying Yan, Xiangjun Wang

Rubber products have become an important strategic resource in the global economy. However, individual rubber tree segmentation in plantation environments remains challenging due to canopy background interference and significant morphological variations among trees. To address these issues, we propose a high-precision segmentation network, TM-WSNet (Spatial Geometry Enhanced Hybrid Feature Extraction Module-Wavelet Grid Feature Fusion Encoder Segmentation Network). First, we introduce SGTramba, a hybrid feature extraction module combining Grouped Transformer and Mamba architectures, designed to reduce confusion between tree crown boundaries and surrounding vegetation or background elements. Second, we propose the WGMS encoder, which enhances structural feature recognition by applying wavelet-based spatial grid downsampling and multiscale feature fusion, effectively handling variations in canopy shape and tree height. Third, a scale optimization algorithm (SCPO) is developed to adaptively search for the optimal learning rate, addressing uneven learning across different resolution scales. We evaluate TM-WSNet on a self-constructed dataset (RubberTree) and two public datasets (ShapeNetPart and ForestSemantic), where it consistently achieves high segmentation accuracy and robustness. In practical field tests, our method accurately predicts key rubber tree parameters (height, crown width, and diameter at breast height) with coefficients of determination (R²) of 1.00, 0.99, and 0.89, respectively. These results demonstrate TM-WSNet's strong potential for supporting precision rubber yield estimation and health monitoring in complex plantation environments.
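Once an individual tree is segmented out of the LiDAR cloud, the structural parameters reported above can be approximated directly from its points. A rough sketch (not the TM-WSNet pipeline) that takes height as the vertical extent, crown width as the horizontal extent, and a crude DBH proxy from the horizontal spread of a thin slice around breast height:

```python
def tree_metrics(points, dbh_height=1.3, slice_halfwidth=0.1):
    """Height, crown width, and a crude DBH proxy from a single-tree
    point cloud given as (x, y, z) tuples in metres, z pointing up."""
    zs = [p[2] for p in points]
    z0 = min(zs)
    height = max(zs) - z0                       # vertical extent
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    crown_width = max(max(xs) - min(xs), max(ys) - min(ys))
    # Thin horizontal slice around breast height (1.3 m above ground).
    trunk = [p for p in points if abs((p[2] - z0) - dbh_height) <= slice_halfwidth]
    dbh = None
    if trunk:
        tx = [p[0] for p in trunk]
        ty = [p[1] for p in trunk]
        dbh = max(max(tx) - min(tx), max(ty) - min(ty))
    return height, crown_width, dbh
```

Real pipelines fit a circle or cylinder to the trunk slice rather than taking its bounding extent, but the extent version already shows where the three reported parameters come from geometrically.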

{"title":"TM-WSNet: A precise segmentation method for individual rubber trees based on UAV LiDAR point cloud.","authors":"Lele Yan, Guoxiong Zhou, Miying Yan, Xiangjun Wang","doi":"10.1016/j.plaphe.2025.100093","DOIUrl":"10.1016/j.plaphe.2025.100093","url":null,"abstract":"<p><p>Rubber products have become an important strategic resource in the global economy. However, individual rubber tree segmentation in plantation environments remains challenging due to canopy background interference and significant morphological variations among trees. To address these issues, we propose a high-precision segmentation network,TM-WSNet (Spatial Geometry Enhanced Hybrid Feature Extraction Module-Wavelet Grid Feature Fusion Encoder Segmentation Network). First, we introduce SGTramba, a hybrid feature extraction module combining Grouped Transformer and Mamba architectures, designed to reduce confusion between tree crown boundaries and surrounding vegetation or background elements. Second, we propose the WGMS encoder, which enhances structural feature recognition by applying wavelet-based spatial grid downsampling and multiscale feature fusion, effectively handling variations in canopy shape and tree height. Third, a scale optimization algorithm (SCPO) is developed to adaptively search for the optimal learning rate, addressing uneven learning across different resolution scales. We evaluate TM-WSNet on a self-constructed dataset (RubberTree) and two public datasets (ShapeNetPart and ForestSemantic), where it consistently achieves high segmentation accuracy and robustness. In practical field tests, our method accurately predicts key rubber tree parameters-height, crown width, and diameter at breast height with coefficients of determination (R<sup>2</sup>) of 1.00, 0.99, and 0.89, respectively. 
These results demonstrate TM-WSNet's strong potential for supporting precision rubber yield estimation and health monitoring in complex plantation environments.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 3","pages":"100093"},"PeriodicalIF":6.4,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12709891/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145782525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
De-occlusion models and diffusion-based data augmentation for size estimation of on-plant oriental melons.
IF 6.4 1区 农林科学 Q1 AGRONOMY Pub Date : 2025-08-21 eCollection Date: 2025-09-01 DOI: 10.1016/j.plaphe.2025.100097
Sungjay Kim, Xianghui Xin, Sang-Yeon Kim, Gyumin Kim, Min-Gyu Baek, Do Yeon Won, Chang Hyeon Baek, Ghiseok Kim

Accurate fruit size estimation is crucial for plant phenotyping, as it enables precise crop management and enhances agricultural productivity by providing essential data for growth and resource efficiency analysis. In this study, we estimated the size of on-plant oriental melons grown in a vertical cultivation system to address the challenges posed by leaf occlusion. Data augmentation was achieved using a diffusion model to generate synthetic leaves that cover existing fruits, creating an enriched dataset. Three instance segmentation models (mask region-based convolutional neural network (CNN), Mask2Former, and detection transformer (DETR)) and six de-occlusion models derived from these architectures were implemented. These models successfully inferred both visible and occluded areas of the fruit. Notably, Amodal Mask2Former and occlusion-aware RCNN (ORCNN) achieved average precision scores of 85.92% and 85.35%, respectively. The inferred masks were used to estimate the height and diameter of the fruit, with Amodal Mask2Former yielding mean absolute errors of 5.46 mm and 4.20 mm and mean absolute percentage errors of 4.86% and 5.33%, respectively. The results indicate enhanced performance of the transformer-based Amodal Mask2Former over CNN architectures in de-occlusion tasks and size estimation. Finally, the improvement of the de-occlusion models over conventional models was assessed and demonstrated across occlusion ratios ranging from 0 to 70%. However, generating synthetic datasets with occlusion ratios over 70% remains a limitation.
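The size errors above are reported as mean absolute error (MAE) and mean absolute percentage error (MAPE). A minimal sketch of both metrics; the melon heights in millimetres below are hypothetical, not measurements from the study:

```python
def mae(true_vals, pred_vals):
    """Mean absolute error, in the units of the measurements."""
    return sum(abs(t - p) for t, p in zip(true_vals, pred_vals)) / len(true_vals)

def mape(true_vals, pred_vals):
    """Mean absolute percentage error, as a percentage."""
    return 100.0 * sum(abs(t - p) / t for t, p in zip(true_vals, pred_vals)) / len(true_vals)

true_h = [100.0, 120.0, 90.0]   # hypothetical measured fruit heights (mm)
pred_h = [105.0, 114.0, 93.0]   # hypothetical estimates from inferred masks
print(round(mae(true_h, pred_h), 2))   # error in mm
print(round(mape(true_h, pred_h), 2))  # error in %
```

Note that MAE and MAPE can rank models differently: MAPE weights each error by the true size, which is why a smaller MAE (4.20 mm for diameter) can still carry a larger MAPE (5.33%) when the underlying dimension is smaller.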

KDOSS-net: Knowledge distillation-based outpainting and semantic segmentation network for crop and weed images.
IF 6.4 1区 农林科学 Q1 AGRONOMY Pub Date : 2025-08-20 eCollection Date: 2025-09-01 DOI: 10.1016/j.plaphe.2025.100098
Sang Hyo Cheong, Sung Jae Lee, Su Jin Im, Juwon Seo, Kang Ryoung Park

Weed management plays a crucial role in increasing crop yields. Semantic segmentation, which classifies each pixel in an image captured by a camera into categories such as crops, weeds, and background, is a widely used method in this context. However, conventional semantic segmentation methods rely solely on pixel information within the camera's field of view (FOV), hindering their ability to detect weeds outside the visible area. This limitation can lead to incomplete weed removal and inefficient herbicide application. Incorporating information beyond the FOV in crop and weed segmentation is therefore essential for effective herbicide usage. Nevertheless, existing research on crop and weed segmentation has largely overlooked this limitation. To address this issue, we propose the knowledge distillation-based outpainting and semantic segmentation network (KDOSS-Net) for crop and weed images, a novel framework that enhances segmentation accuracy by leveraging information beyond the FOV. KDOSS-Net consists of two parts: the object prediction-guided outpainting and semantic segmentation network (OPOSS-Net), which serves as the teacher model by restoring areas outside the FOV and performing semantic segmentation, and the semantic segmentation without outpainting network (SSWO-Net), which serves as the student model, directly performing segmentation without outpainting. Through knowledge distillation (KD), the student model learns from the teacher's outputs, which results in a lightweight yet highly accurate segmentation network that is suitable for deployment on agricultural robots with limited computing power. Experiments on three public datasets (Rice seedling and weed, CWFID, and BoniRob) yielded mean intersection over union (mIoU) scores of 0.6315, 0.7101, and 0.7524, respectively. These results demonstrate that KDOSS-Net achieves higher accuracy than existing state-of-the-art (SOTA) segmentation models while significantly reducing computational overhead.
Furthermore, the weed information extracted using our method is automatically linked as input to the open-source large language and vision assistant (LLaVA), enabling the development of a system that recommends optimal herbicide strategies tailored to the detected weed class.
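The abstract does not spell out KDOSS-Net's exact distillation objective, but the classic knowledge-distillation loss (in the Hinton-style formulation, assumed here for illustration) matches the student's temperature-softened class distribution to the teacher's via KL divergence. All logits and class names below are illustrative:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return temperature ** 2 * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Per-pixel logits for three classes (crop, weed, background) -- illustrative.
teacher = [2.0, 0.5, -1.0]
student = [1.5, 0.8, -0.5]
print(kd_loss(student, teacher))  # small positive value; exactly 0 when logits match
```

In a segmentation setting this loss would be averaged over all pixels and typically combined with the ordinary cross-entropy against the ground-truth masks.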

Analysis of Wheat Spike Morphological Traits by 2D Imaging.
IF 6.4 1区 农林科学 Q1 AGRONOMY Pub Date : 2025-08-14 eCollection Date: 2025-09-01 DOI: 10.1016/j.plaphe.2025.100096
Fujun Sun, Shusong Zheng, Zongyang Li, Qi Gao, Ni Jiang

Wheat spike morphology plays a critical role in determining grain yield and has garnered significant interest in genetics and breeding research. However, traditional measurement methods are limited to simple traits and fail to capture complex spike phenotypes with high precision, thus limiting progress in yield-related trait analysis. In this study, a deep learning pipeline, called Speakerphone, for acquiring precise wheat spike phenotypes was developed. Our pipeline achieved a mean intersection over union (mIoU) of 0.948 in spike segmentation. Additionally, the spike traits measured by our method strongly agreed with the manually measured values, with Pearson correlation coefficients of 0.9865 for spike length, 0.9753 for the number of spikelets per spike, and 0.9635 for fertile spikelets. Using experimental data of 221 wheat cultivars from various regions of Zhao County, Hebei Province, China, our pipeline extracted 45 phenotypes and analyzed their correlations with thousand-grain weight (TGW) and spike yield. Our findings indicate that precise measurements of spike area, spikelet area, and other phenotypic traits clarify the correlation between spike morphology and wheat yield. Through hierarchical clustering on the basis of spike morphology, we categorized wheat spikes into six classes and identified the phenotypic differences among these classes and their effects on TGW and yield. Furthermore, phenotypic differences among wheat cultivars from different geographical regions and over decades were revealed in this study, with an increase in the number of large-spike cultivars over time, especially in southern China. This research may help breeders understand the relationship between wheat spike morphology and yield, thus providing an important basis for future wheat breeding efforts.
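Agreement with the manual measurements above is quantified with Pearson correlation coefficients. A minimal sketch of the statistic; the spike lengths in centimetres below are hypothetical, not data from the study:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

manual = [9.2, 10.5, 8.7, 11.1]    # hypothetical manual spike lengths (cm)
pipeline = [9.3, 10.4, 8.8, 11.0]  # hypothetical pipeline measurements
print(round(pearson_r(manual, pipeline), 4))
```

Values near 1, such as the reported 0.9865 for spike length, indicate an almost perfectly linear relationship between the pipeline's measurements and the manual ground truth.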

Volumetric Deep Learning-Based Precision Phenotyping of Gene-Edited Tomato for Vertical Farming.
IF 6.4 1区 农林科学 Q1 AGRONOMY Pub Date : 2025-08-14 eCollection Date: 2025-09-01 DOI: 10.1016/j.plaphe.2025.100095
Yu-Jin Jeon, Seungpyo Hong, Taek Sung Lee, Soo Hyun Park, Giha Song, Myeong-Gyun Seo, Jiwoo Lee, Yoonseo Lim, Jeong-Tak An, Sehee Lee, Ho-Young Jeong, Soon Ju Park, Chanhui Lee, Dae-Hyun Jung, Choon-Tak Kwon

Global climate change and urbanization have posed challenges to sustainable food production and resource management in agriculture. Vertical farming, in particular, allows for high-density cultivation on limited land but requires precise control of crop height to suit vertical farming systems. Tomato, a globally significant vegetable crop, urgently requires mutant varieties that suppress indeterminate growth for effective cultivation in vertical farming systems. In this study, we utilized the CRISPR-Cas9 system to develop a new tomato cultivar optimized for vertical farming by editing the Gibberellin 20-oxidase (SlGA20ox) genes, which are well known for their roles in the "Green Revolution". Additionally, we proposed a volumetric model to effectively identify mutants through non-destructive analysis of chlorophyll fluorescence. The proposed model achieved over 84% classification accuracy in distinguishing triple-determinate and slga20ox gene-edited plants, outperforming traditional machine learning methods and 1D-CNN approaches. Unlike previous studies that primarily relied on manual feature extraction from chlorophyll fluorescence data, this research introduced a deep learning framework capable of automating feature extraction in three dimensions while learning the temporal characteristics of chlorophyll fluorescence imaging data. The study demonstrated the potential to classify tomato plants customized for vertical farming, leveraging advanced phenotypic analysis methods. Our approach explores new analytical methods for chlorophyll fluorescence imaging data within AI-based phenotyping and can be extended to other crops and traits, accelerating breeding programs and enhancing the efficiency of genetic resource management.
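The headline figure above is classification accuracy over mutant classes. As a minimal sketch of that metric — the labels below are illustrative, not the study's data:

```python
def accuracy(true_labels, pred_labels):
    """Fraction of samples whose predicted class matches the true class."""
    correct = sum(1 for t, p in zip(true_labels, pred_labels) if t == p)
    return correct / len(true_labels)

# Hypothetical labels: 'wt' = wild type, 'ko' = slga20ox gene-edited plant.
truth = ['wt', 'ko', 'ko', 'wt', 'ko']
preds = ['wt', 'ko', 'wt', 'wt', 'ko']
print(accuracy(truth, preds))  # → 0.8
```

In the study's setting, the same computation over held-out plants yields the reported >84% accuracy for the volumetric model.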

全球气候变化和城市化对可持续粮食生产和农业资源管理提出了挑战。特别是垂直农业,允许在有限的土地上高密度种植,但需要精确控制作物高度以适应垂直农业系统。番茄作为一种全球重要的蔬菜作物,迫切需要抑制不确定性生长的突变品种,以便在垂直种植系统中有效种植。在本研究中,我们利用CRISPR-Cas9系统,通过编辑在“绿色革命”中发挥重要作用的赤霉素20-氧化酶(SlGA20ox)基因,培育出一种适合垂直种植的番茄新品种。此外,我们提出了一个体积模型,通过叶绿素荧光的无损分析有效地识别突变体。该模型在区分triple-determinate和slga20ox基因编辑植物方面的分类准确率超过84%,优于传统的机器学习方法和1D-CNN方法。与以往的研究主要依赖于人工从叶绿素荧光数据中提取特征不同,本研究引入了一种深度学习框架,能够在学习叶绿素荧光成像数据的时间特征的同时,在三维空间中自动提取特征。该研究展示了利用先进的表型分析方法对垂直农业定制的番茄植物进行分类的潜力。我们的方法探索了基于人工智能表型分析的叶绿素荧光成像数据的新分析方法,并可扩展到其他作物和性状,加快育种计划,提高遗传资源管理效率。
{"title":"Volumetric Deep Learning-Based Precision Phenotyping of Gene-Edited Tomato for Vertical Farming.","authors":"Yu-Jin Jeon, Seungpyo Hong, Taek Sung Lee, Soo Hyun Park, Giha Song, Myeong-Gyun Seo, Jiwoo Lee, Yoonseo Lim, Jeong-Tak An, Sehee Lee, Ho-Young Jeong, Soon Ju Park, Chanhui Lee, Dae-Hyun Jung, Choon-Tak Kwon","doi":"10.1016/j.plaphe.2025.100095","DOIUrl":"10.1016/j.plaphe.2025.100095","url":null,"abstract":"<p><p>Global climate change and urbanization have posed challenges to sustainable food production and resource management in agriculture. Vertical farming, in particular, allows for high-density cultivation on limited land but requires precise control of crop height to suit vertical farming systems. Tomato, a globally significant vegetable crop, urgently requires mutant varieties that suppress indeterminate growth for effective cultivation in vertical farming systems. In this study, we utilized the CRISPR-Cas9 system to develop a new tomato cultivar optimized for vertical farming by editing the <i>Gibberellin 20-oxidase</i> (<i>SlGA20ox</i>) genes, which are well known for their roles in the \"Green Revolution\". Additionally, we proposed a volumetric model to effectively identify mutants through non-destructive analysis of chlorophyll fluorescence. The proposed model achieved over 84 ​% classification accuracy in distinguishing triple-determinate and <i>slga20ox</i> gene-edited plants, outperforming traditional machine learning methods and 1D-CNN approaches. Unlike previous studies that primarily relied on manual feature extraction from chlorophyll fluorescence data, this research introduced a deep learning framework capable of automating feature extraction in three dimensions while learning the temporal characteristics of chlorophyll fluorescence imaging data. The study demonstrated the potential to classify tomato plants customized for vertical farming, leveraging advanced phenotypic analysis methods. 
Our approach explores new analytical methods for chlorophyll fluorescence imaging data within AI-based phenotyping and can be extended to other crops and traits, accelerating breeding programs and enhancing the efficiency of genetic resource management.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 3","pages":"100095"},"PeriodicalIF":6.4,"publicationDate":"2025-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12710025/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145782451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0