
Latest Publications in Plant Phenomics

PhenoRob-F: An autonomous ground-based robot for high-throughput phenotyping of field crops.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-08-13 | eCollection Date: 2025-09-01 | DOI: 10.1016/j.plaphe.2025.100085
Meng Yang, Zhengda Li, Jiale Cui, Yang Shao, Ruifang Zhai, Wen Qiao, Wanneng Yang, Peng Song

Understanding the genetic basis of quantitative traits related to crop growth, yield, and stress response requires the acquisition of large-scale, high-quality phenotypic datasets. High-throughput phenotyping platforms have become effective tools for meeting this requirement. Autonomous mobile robots have gained prominence owing to their ability to carry heavy payloads, their operational flexibility, and their proximity to crops, which allows for higher imaging resolution. In this study, we introduce PhenoRob-F (a phenotyping robot for the field), a cross-row, wheeled robot designed for efficient and automated phenotyping under field conditions. The mobile platform and phenotyping module of the robot were engineered to meet the specific demands of field phenotyping, with integrated visual and satellite navigation systems enabling autonomous operation. We validated the performance of the robot through a series of experiments involving various crop canopies. By capturing RGB images of rice and wheat, we independently performed wheat ear detection and rice panicle segmentation. For wheat ear detection using the YOLOv8m model, we achieved a precision of 0.783, a recall of 0.822, and a mean average precision (mAP) of 0.853. For rice panicle segmentation, the SegFormer_B0 model yielded a mean intersection over union (mIoU) of 0.949 and an accuracy of 0.987. Additionally, by capturing RGB-D data of maize canopies, we performed 3D reconstructions to calculate plant height, achieving an R² of 0.99 compared with manual measurements. Similar experiments with rapeseed yielded an R² of 0.97. Near-infrared spectral data collected from drought-stressed rice plants enabled the classification of drought severity into five categories, with classification accuracies ranging from 0.977 to 0.996. Our results reveal that PhenoRob-F is an effective tool for high-throughput phenotyping and is capable of providing precise data to support phenotypic trait analysis and the selection of superior crop genotypes.
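The detection step can be sketched with the Ultralytics YOLOv8 API. The checkpoint name `wheat_ear_yolov8m.pt` and dataset config `wheat_ears.yaml` below are hypothetical stand-ins for the authors' artifacts, which are not distributed with the abstract; this is a minimal sketch, not the paper's pipeline.

```python
# Minimal wheat ear detection sketch with Ultralytics YOLOv8.
# "wheat_ear_yolov8m.pt" and "wheat_ears.yaml" are hypothetical placeholders.
from ultralytics import YOLO

model = YOLO("wheat_ear_yolov8m.pt")  # assumed fine-tuned YOLOv8m checkpoint

# Detect ears in one robot-captured RGB image.
results = model.predict("wheat_canopy.jpg", conf=0.25)
for r in results:
    for box in r.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()  # bounding box corners in pixels
        print(f"ear ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f}) conf={float(box.conf):.2f}")

# Validation on a labelled split reports precision, recall, and mAP.
metrics = model.val(data="wheat_ears.yaml")
print(metrics.box.map50)  # mAP at IoU 0.50
```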

Citations: 0
ChatLeafDisease: a chain-of-thought prompting approach for crop disease classification using large language models.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-08-07 | eCollection Date: 2025-09-01 | DOI: 10.1016/j.plaphe.2025.100094
Jiandong Pan, Renhai Zhong, Fulin Xia, Jingfeng Huang, Linchao Zhu, Yi Yang, Tao Lin

Accurate crop disease classification is essential for disease management to support food security. Deep learning has shown high classification accuracy in image-based disease identification. However, the deep learning approach usually needs large amounts of training data to achieve satisfactory performance, which hinders its application and scalability across different crops. Large language models (LLMs) have shown strong generation capability and zero-shot performance, yet how to utilize LLMs for crop disease classification remains unclear. In this study, we developed a training-free framework named ChatLeafDisease (ChatLD), based on the GPT-4o model with chain-of-thought (CoT) prompting, for crop disease classification. The framework includes a disease description database that provides knowledge of crop diseases and a disease classification agent guided by CoT prompts to understand the patterns of disease-infected leaves and classify the disease. The original GPT-4o model, the Gemini model, and the Contrastive Language-Image Pre-training (CLIP) model were chosen as baselines. Results showed that the ChatLD framework achieved higher and more stable classification accuracy (88.9%) for six tomato diseases than the GPT-4o (45.9%), Gemini (56.1%), and CLIP (64.3%) models. We found that the scoring rules enabled the ChatLD framework to capture the typical differences across diseases. Ablation results showed that the CoT prompts integrated the scoring rules and important notes to enable ChatLD to achieve high classification accuracy. Comparison between different description texts showed that condensed disease descriptions improved classification performance. The results showed that the ChatLD framework achieved high accuracy for disease classes of new crops, highlighting its scalability across various crop diseases. The proposed framework provides a new LLM-based alternative for crop disease classification that uses only textual descriptions of diseases, with no training process.
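The agent loop can be sketched with the OpenAI Python SDK. The prompt wording, scoring rules, and disease descriptions below are illustrative placeholders, not the authors' actual ChatLD prompts or database entries.

```python
# Sketch of CoT-prompted, training-free disease classification with GPT-4o.
# The prompt and disease descriptions are illustrative, not the paper's own.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("tomato_leaf.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

cot_prompt = (
    "You are a plant pathologist. Step 1: describe lesion colour, shape, and "
    "distribution on the leaf. Step 2: score the observation against each "
    "disease description below from 0 to 10. Step 3: name the highest-scoring "
    "disease.\n"
    "- Early blight: brown lesions with concentric rings and a yellow halo.\n"
    "- Late blight: large water-soaked grey-green patches, white mould beneath.\n"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": cot_prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```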

Citations: 0
The Global Wheat Full Semantic Organ Segmentation (GWFSS) dataset.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-08-06 | eCollection Date: 2025-09-01 | DOI: 10.1016/j.plaphe.2025.100084
Zijian Wang, Radek Zenkl, Latifa Greche, Benoit De Solan, Lucas Bernigaud Samatan, Safaa Ouahid, Andrea Visioni, Carlos A Robles-Zazueta, Francisco Pinto, Ivan Perez-Olivera, Matthew P Reynolds, Chen Zhu, Shouyang Liu, Marie-Pia D'argaignon, Raul Lopez-Lozano, Marie Weiss, Afef Marzougui, Lukas Roth, Sébastien Dandrifosse, Alexis Carlier, Benjamin Dumont, Benoît Mercatoris, Javier Fernandez, Scott Chapman, Keyhan Najafian, Ian Stavness, Haozhou Wang, Wei Guo, Nicolas Virlet, Malcolm J Hawkesford, Zhi Chen, Etienne David, Joss Gillet, Kamran Irfan, Alexis Comar, Andreas Hund

Computer vision is increasingly used in farmers' fields and agricultural experiments to quantify important traits. Imaging setups with a sub-millimeter ground sampling distance enable the detection and tracking of plant features, including size, shape, and colour. Although today's AI-driven foundation models segment almost any object in an image, they still fail for complex plant canopies. To improve model performance, the global wheat dataset consortium assembled a diverse set of images from experiments around the globe. Following the Global Wheat Head Detection dataset (GWHD), the new dataset targets full semantic segmentation (GWFSS) of organs (leaves, stems and spikes) covering all developmental stages. Images were collected by 11 institutions using a wide range of imaging setups. Two datasets are provided: i) a set of 1096 diverse images in which all organs were labelled at the pixel level, and ii) a dataset of 52,078 images without annotations, available for additional training. The labelled set was used to train segmentation models based on DeepLabV3Plus and Segformer. Our Segformer model performed slightly better than DeepLabV3Plus, with a mIoU of ca. 90% for leaves and spikes. However, the precision for stems was considerably lower, at 54%. The major advantages over published models are: i) the exclusion of weeds from the wheat canopy, and ii) the detection of all wheat features, including necrotic and senescent tissues, and their separation from crop residues. This facilitates further development in classifying healthy vs. unhealthy tissue, addressing the increasing need for accurate quantification of senescence and diseases in wheat canopies.
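Inference with a SegFormer-B0 model can be sketched via Hugging Face Transformers. The public ADE20k checkpoint below is a stand-in, since the GWFSS-trained weights and their leaf/stem/spike label map are assumptions not included with the abstract.

```python
# Sketch of SegFormer semantic segmentation inference (Hugging Face Transformers).
# The ADE20k checkpoint is a public stand-in for the GWFSS-trained model.
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

ckpt = "nvidia/segformer-b0-finetuned-ade-512-512"
processor = SegformerImageProcessor.from_pretrained(ckpt)
model = SegformerForSemanticSegmentation.from_pretrained(ckpt)

image = Image.open("wheat_plot.jpg")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred = upsampled.argmax(dim=1)[0]  # (H, W) integer class map
```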

Citations: 0
Multi-modal few-shot learning for anthesis prediction of individual wheat plants.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-07-21 | eCollection Date: 2025-09-01 | DOI: 10.1016/j.plaphe.2025.100091
Yiting Xie, Stuart J Roy, Rhiannon K Schilling, Huajian Liu

Anthesis prediction is crucial for wheat breeding. While current tools provide estimates of average anthesis at the field scale, they fail to address the needs of breeders who require accurate predictions for individual plants. Hybrid breeders have to finalize their pollination plans at least 10 days before flowering is due, and biotechnology field trials in the United States and Australia must report to regulators 7-14 days before the first plant flowers. Currently, predicting anthesis of individual wheat plants is a labour-intensive, inefficient, and costly process. Individual wheat plants of the same cultivar within the same field may exhibit substantial variation in anthesis timing because of significant variations in their immediate surroundings. In this study, we developed an efficient and cost-effective machine vision approach to predict anthesis of individual wheat plants. By integrating RGB imagery with in-situ meteorological data, our multimodal framework simplifies the anthesis prediction problem into binary or three-class classification tasks, aligning with breeders' requirements for individual wheat flowering prediction on the crucial days before anthesis. Furthermore, we incorporated a few-shot learning method to improve the model's adaptability across different growth environments and to address the challenge of limited training data. The model achieved an F1 score above 0.8 in all planting settings.
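The multimodal fusion can be sketched as a small PyTorch module that concatenates CNN image features with meteorological variables before a classification head. The ResNet-18 backbone, feature sizes, and number of weather variables are illustrative assumptions, not the paper's architecture.

```python
# Sketch: fuse image features with in-situ weather data for a binary
# "will flower within N days" decision. Sizes are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class AnthesisClassifier(nn.Module):
    def __init__(self, n_weather: int = 5, n_classes: int = 2):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()          # expose the 512-d image embedding
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(512 + n_weather, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, image, weather):
        feats = self.backbone(image)                # (B, 512)
        fused = torch.cat([feats, weather], dim=1)  # append met. variables
        return self.head(fused)

model = AnthesisClassifier()
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 5))
print(logits.shape)  # torch.Size([4, 2])
```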

Citations: 0
RsegNet: An Advanced Methodology for Individual Rubber Tree Segmentation and Structural Parameter Extraction from UAV LiDAR Point Clouds.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-07-16 | eCollection Date: 2025-09-01 | DOI: 10.1016/j.plaphe.2025.100090
Hengrui Wang, Zilin Ye, Qin Zhang, Mingfang Wang, Guoxiong Zhou, Xiangjun Wang, Li Li, Shuqi Lin

As an important tropical cash crop, rubber trees play a key role in the rubber industry and ecosystem. However, a significant challenge in precision agriculture and refined management of rubber plantations lies in the limitations of traditional point cloud segmentation methods, which struggle to accurately extract structural parameters and capture the spatial layout of individual rubber trees. We therefore propose an optimized dual-channel clustering method for the UAV LiDAR-based Rubber Tree Point Cloud Segmentation Network (RsegNet) to improve the assessment of rubber tree architecture and traits. First, we designed a cosine feature extraction network, termed CosineU-Net, to address the branch-and-leaf overlap problem by calculating the cosine similarity of the spatial and positional features of each point, leveraging deep learning approaches to improve feature representation. Second, we constructed a dual-channel clustering module that reduces prediction error in rubber tree point cloud data, integrating multi-class association and background classification to tackle background interference. The cluster identification and separation accuracy in high-dimensional data processing is enhanced through a dynamic clustering optimization algorithm. On our self-built dataset and across five regions of the FOR-instance forest dataset, RsegNet achieved the best performance among five state-of-the-art networks, reaching an F-score of 86.1%. The method calculated structural attributes, including height, crown diameter, and volume, for rubber trees in three areas under different environments in Danzhou City, Hainan Province, providing robust support for precise monitoring, plantation management, and health assessment.
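The cosine-similarity computation at the heart of CosineU-Net can be sketched in PyTorch. The per-point feature construction and the 0.9 affinity threshold are illustrative assumptions rather than the paper's exact design.

```python
# Sketch: pairwise cosine similarity between per-point feature vectors.
# Feature dimensionality and the affinity threshold are assumptions.
import torch
import torch.nn.functional as F

features = torch.randn(1024, 32)   # (N, C) learned spatial/positional features

normed = F.normalize(features, dim=1)   # unit-length rows
similarity = normed @ normed.T          # (N, N) cosine similarities in [-1, 1]

# Points with strongly aligned features are candidates for the same tree,
# which helps disentangle overlapping branches and leaves between neighbours.
same_tree = similarity > 0.9
print(same_tree.float().mean())
```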

Citations: 0
Predictive modeling, pattern recognition, and spatiotemporal representations of plant growth in simulated and controlled environments: A comprehensive review.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-07-15 | eCollection Date: 2025-09-01 | DOI: 10.1016/j.plaphe.2025.100089
Mohamed Debbagh, Shangpeng Sun, Mark Lefsrud

Accurate predictions and representations of plant growth patterns in simulated and controlled environments are important for addressing various challenges in plant phenomics research. This review explores work on state-of-the-art predictive pattern recognition techniques, focusing on the spatiotemporal modeling of plant traits and the integration of dynamic environmental interactions. We provide a comprehensive examination of deterministic, probabilistic, and generative modeling approaches, emphasizing their applications in high-throughput phenotyping and simulation-based plant growth forecasting. Key topics include regression and neural network-based representation models for the forecasting task, limitations of existing experiment-based deterministic approaches, and the need for dynamic frameworks that incorporate uncertainty and evolving environmental feedback. This review surveys advances in 2D and 3D structured data representations through functional-structural plant models and conditional generative models. We offer a perspective on opportunities for future work, emphasizing the integration of domain-specific knowledge into data-driven methods, improvements to available datasets, and the implementation of these techniques in real-world applications.

Citations: 0
Seeing the unseen: A novel approach to extract latent plant root traits from digital images.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-07-09 | eCollection Date: 2025-09-01 | DOI: 10.1016/j.plaphe.2025.100088
Mirza Shoaib, Adam M Dimech, Simone J Rochfort, Christopher Topp, Matthew J Hayden, Surya Kant

A novel approach, the Algorithmic Root Trait (ART) extraction method, identifies and quantifies computationally derived plant root traits, revealing latent patterns related to dense root clusters in digital images. Using an ensemble of multiple unsupervised machine learning algorithms and a custom algorithm, 27 ARTs were extracted reflecting dense root cluster size and spatial location. These ARTs were then used independently and in combination with Traditional Root Traits (TRTs) to classify wheat genotypes differing in drought tolerance. ART-based models outperformed TRT-only models in drought classification (e.g., 96.3% vs. 85.6% accuracy). Combining ARTs and TRTs further improved accuracy to 97.4%. Notably, 4 selected ARTs matched the performance of all 23 TRTs, offering 5.8× higher information density (0.213 vs. 0.037 accuracy/feature). This superiority reflects the ability of ARTs to capture richer, more complex architectural information, evidenced by higher internal variability (35.59 ± 11.41 vs. 28.91 ± 14.28 for TRTs) and distinct data structures in multivariate analyses; PERMANOVA confirmed that ARTs and TRTs provide complementary insights. Validated through experiments in controlled environments and field conditions with drought-tolerant and susceptible wheat genotypes, ART offers a scalable, customisable toolset for high-throughput phenotyping of plant roots. By bridging conventional, visually derived traits with autonomous computational analyses, this method broadens root phenotyping pipelines and underscores the value of harnessing sensor data that transcends human perception. ART thus emerges as a promising framework for revealing hidden features in plant imaging, with broader applications across plant science to deepen our understanding of crop adaptation and resilience.
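The cluster-trait idea can be sketched with an off-the-shelf unsupervised algorithm. DBSCAN here stands in for the paper's unnamed ensemble, and the mask file, eps, and min_samples values are illustrative assumptions.

```python
# Sketch: derive dense-root-cluster traits (size, spatial location) from a
# segmented root image with DBSCAN. Parameter values are assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

# Binary root mask (1 = root pixel) from a segmented image.
mask = np.load("root_mask.npy")
ys, xs = np.nonzero(mask)
pixels = np.column_stack([xs, ys]).astype(float)

labels = DBSCAN(eps=5.0, min_samples=20).fit_predict(pixels)

# Per-cluster traits: size (pixel count) and centroid (spatial location).
for k in set(labels) - {-1}:           # -1 is DBSCAN's noise label
    cluster = pixels[labels == k]
    print(k, len(cluster), cluster.mean(axis=0))
```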

Citations: 0
FSEA: Incorporating domain-specific prior knowledge for few-shot weed detection.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-07-05 | eCollection Date: 2025-09-01 | DOI: 10.1016/j.plaphe.2025.100086
Jingyao Gai, Bao Lu, Shijie Liu, Mingzhang Pan, Boqian Chen, Lie Tang, Haiyan Cen

Deep learning-based crop and weed detection is essential for modern precision weed control, but its effectiveness is limited when facing newly presented weed species because collecting large, balanced training datasets under field conditions is impractical. To address these challenges, this study presents a few-shot learning framework that achieves rapid and effective adaptation to new weed species by leveraging domain-specific characteristics of plant detection. We propose the few-shot enhanced attention (FSEA) network, built upon Faster R-CNN, which incorporates three forms of prior knowledge into weed detection by: (1) designing a channel attention-based feature fusion module with an excess-green feature extractor to leverage the color characteristics of plants and background, (2) designing a feature enhancement module to accommodate diverse plant morphologies, and (3) applying an optimized loss function designed specifically for plant occlusion scenarios. Using commonly observed crop and weed species (common beet, sugarcane, barnyard grass, field pennycress, and Chinese money plant) as base classes, FSEA achieved an all-class mAP of 0.416 and a novel-class mAP of 0.346 when adapting to less frequent weed species (common purslane, Asian copperleaf, goosefoot, clover, and goosegrass), after training for 40 epochs using only 30 samples per species. This performance significantly outperforms state-of-the-art few-shot detectors (TFA, FSCE, Meta R-CNN, Meta-DETR, DCFS, DiGEO) and the traditional detector YOLOv7, indicating the effectiveness of incorporating domain-specific prior knowledge into few-shot weed detection. This study provides a fundamental methodology for rapidly adapting weed detection systems to new environments and species, making automated weed management more practical and accessible for various agricultural applications. The source code and dataset are publicly available (https://github.com/skyofyao/FSEA) to facilitate further research in this domain.
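The excess-green (ExG) extractor feeding the fusion module follows the standard vegetation index ExG = 2g − r − b on chromaticity-normalized channels; a minimal sketch, in which the threshold value is an illustrative assumption:

```python
# Sketch: excess-green (ExG) index, a standard colour feature separating
# green vegetation from soil background. The 0.1 threshold is an assumption.
import numpy as np

def excess_green(rgb: np.ndarray) -> np.ndarray:
    """rgb: (H, W, 3) float array in [0, 1]; returns an (H, W) ExG map."""
    total = rgb.sum(axis=2) + 1e-8           # avoid division by zero
    r, g, b = (rgb[..., i] / total for i in range(3))
    return 2.0 * g - r - b                   # high for green vegetation

rgb = np.random.rand(480, 640, 3)
exg = excess_green(rgb)
plant_mask = exg > 0.1                       # simple plant/background split
```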

Citations: 0
Panoptic segmentation for complete labeling of fruit microstructure in 3D micro-CT images with deep learning.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-07-05 | eCollection Date: 2025-09-01 | DOI: 10.1016/j.plaphe.2025.100087
Leen Van Doorselaer, Pieter Verboven, Bart Nicolai

Metabolic processes in plant organs involving the transport of water, metabolic gases, and nutrients depend on the three-dimensional (3D) microscopic tissue morphology. However, imaging and quantifying this microstructure, including the spatial layout of parenchyma cells, pores, vascular bundles, and special features such as stone cell clusters (brachysclereids), is challenging. To address this, a 3D deep learning-based panoptic segmentation model, combining semantic and instance segmentation, was developed to accelerate and improve microstructure characterization of apple and pear fruit tissue in X-ray micro-computed tomography (CT) images. In addition, various training datasets and data augmentation techniques, including synthetic data, were explored to enhance segmentation quality. The 3D panoptic segmentation achieved an Aggregated Jaccard Index of 0.89 and 0.77 for apple and pear tissue, respectively, outperforming both the previously designed 2D instance segmentation model and a marker-based watershed segmentation benchmark. The model successfully labeled vascular bundles with a Dice Similarity Coefficient (DSC) of 0.51 in apple tissue and 0.79 in pear tissue, although thin vasculature in apple remained more challenging to segment. The model also effectively segmented stone cell clusters in pear tissue, achieving a DSC of 0.81. Despite evaluating different methods to enhance segmentation quality, none improved test performance beyond that of the model trained on the standard dataset. The proposed 3D panoptic segmentation model offers the most complete automated protocol to date for plant tissue labelling and morphometric quantification from native X-ray micro-CT images, without extensive sample preparation such as contrast labelling. The developed method drastically accelerates, if it does not replace, conventional human-in-the-loop analysis of such images.
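The Dice Similarity Coefficient used to score the vascular-bundle and stone-cell classes is a standard overlap metric; a minimal sketch on binary 3D masks, assuming the masks have already been produced by the segmentation model:

```python
# Sketch: Dice Similarity Coefficient between two binary 3D masks,
# DSC = 2|A ∩ B| / (|A| + |B|). The random masks are placeholders.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Binary masks of equal shape; returns the Dice overlap in [0, 1]."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0                      # both empty: perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

pred = np.random.rand(64, 64, 64) > 0.5
truth = np.random.rand(64, 64, 64) > 0.5
print(f"DSC = {dice(pred, truth):.3f}")
```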

Citations: 0
CitrusGAN: sparse-view X-ray CT reconstruction for citrus based on generative adversarial networks.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-06-26 | eCollection Date: 2025-09-01 | DOI: 10.1016/j.plaphe.2025.100082
Hansong Xiang, Zilong Xu, Yonghua Yu, Jiantan Yang, Shanjun Li, Yaohui Chen

3D phenotyping of external and internal structures is important for breeding new fruit varieties. As manual phenotyping is error-prone and time-consuming, developing high-throughput solutions with enhanced precision and low costs is necessary. This study presents CitrusGAN, a generative adversarial network-based method to reconstruct 3D citrus CT models from sparse-view X-ray images. The input X-rays are arranged in orthogonal pairs to provide additional information, and customized loss functions enable more effective learning of the mapping from 2D X-ray features to 3D CT volumes. Experimental results show that 6 views can generate high-quality citrus CT volumes, with a structural similarity index of 92.1% and a peak signal-to-noise ratio of 26.374 dB compared with the real CT models. Moreover, the morphology of the generated model can be conveniently measured in 3D space, facilitating the high-precision extraction of phenotypic traits including fruit length, width, height, volume, surface area, peel thickness, number of segments, and edible rate. As X-rays can be obtained efficiently using low-cost X-ray machines, the proposed method can potentially be developed into high-throughput equipment for fruit production lines or portable devices to realize in-field phenotyping.
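The two reported reconstruction metrics can be computed with scikit-image. Treating the CT volumes as float arrays in [0, 1] stored as .npy files is an assumption for illustration; the file names are placeholders.

```python
# Sketch: PSNR and SSIM between a generated CT volume and the ground truth,
# using scikit-image. Volume files and the [0, 1] data range are assumptions.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

real = np.load("ct_real.npy")        # (D, H, W) ground-truth CT volume
fake = np.load("ct_generated.npy")   # (D, H, W) GAN reconstruction from 6 X-rays

psnr = peak_signal_noise_ratio(real, fake, data_range=1.0)
ssim = structural_similarity(real, fake, data_range=1.0)  # N-D grayscale SSIM
print(f"PSNR = {psnr:.3f} dB, SSIM = {ssim:.3f}")
```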

Citations: 0