LenRuler: a rice-centric method for automated radicle length measurement with multicrop validation
Pub Date: 2025-09-08 | eCollection Date: 2025-09-01 | DOI: 10.1016/j.plaphe.2025.100103
Jinfeng Zhao, Zeyu Hou, Hua Hua, Qianlong Nie, Yuqian Pang, Yan Ma, Xuehui Huang
Radicle length is a critical indicator of seed vigor, germination capacity, and seedling growth potential. However, existing measurement methods face challenges in automation, efficiency, and generalizability, often requiring manual intervention or re-annotation for different seed types. To address these limitations, this paper proposes an automated method, LenRuler, with a primary focus on rice seeds and validation in multiple crops. The method leverages the Segment Anything Model (SAM) as the foundational segmentation model and employs a coarse-to-fine segmentation strategy combined with Gaussian-based classification to automatically generate bounding boxes and centroids, which are then fed into SAM for precise segmentation of the seed coat and radicle. The radicle length is subsequently computed by measuring the geodesic distance from the radicle skeleton's farthest endpoint to its nearest intersection with the seed coat skeleton and converting that distance into a physical length. Experiments on the Riceseed1 dataset show that the proposed method achieves a Dice coefficient of 0.955 and a pixel accuracy of 0.944, demonstrating excellent segmentation performance. Radicle length measurements on the Riceseed2 test set yield a mean absolute error (MAE) of 0.273 mm and a coefficient of determination (R²) of 0.982, confirming the method's high precision for rice. On the Otherseed dataset, the predicted radicle lengths for maize (Zea mays), pearl millet (Pennisetum glaucum), and rye (Secale cereale) are consistent with the observed radicle length distributions, demonstrating strong cross-species performance. These results establish LenRuler as an accurate and automated solution for radicle length measurement in rice, with validated applicability to other crop species.
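The skeleton-geodesic measurement the abstract describes can be made concrete with a small sketch. The Python below is a hypothetical illustration, not the authors' code: it assumes binary masks for the radicle and seed coat are already available (e.g., from SAM), and the endpoint/junction heuristics and the `mm_per_px` scale are assumptions.

```python
# Illustrative sketch of skeleton-geodesic radicle length; not the LenRuler implementation.
import numpy as np
import networkx as nx
from skimage.morphology import skeletonize

def skeleton_graph(skel: np.ndarray) -> nx.Graph:
    """Build an 8-connected graph over skeleton pixels, weighting diagonal steps by sqrt(2)."""
    g = nx.Graph()
    ys, xs = np.nonzero(skel)
    pixels = set(zip(ys.tolist(), xs.tolist()))
    for y, x in pixels:
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if (dy, dx) == (0, 0):
                    continue
                n = (y + dy, x + dx)
                if n in pixels:
                    g.add_edge((y, x), n, weight=np.hypot(dy, dx))
    return g

def radicle_length_mm(radicle_mask, seed_mask, mm_per_px):
    """Geodesic length from the radicle pixel nearest the seed-coat skeleton (a proxy for
    the junction) to the farthest radicle endpoint, converted to millimetres."""
    rad_skel = skeletonize(radicle_mask.astype(bool))
    seed_skel = skeletonize(seed_mask.astype(bool))
    g = skeleton_graph(rad_skel)
    # Endpoints are skeleton pixels with exactly one neighbour.
    endpoints = [n for n in g if g.degree(n) == 1]
    seed_pts = np.argwhere(seed_skel)
    def dist_to_seed(p):
        return np.min(np.hypot(seed_pts[:, 0] - p[0], seed_pts[:, 1] - p[1]))
    start = min(g.nodes, key=dist_to_seed)
    # Farthest endpoint by geodesic distance along the skeleton.
    lengths = nx.single_source_dijkstra_path_length(g, start, weight="weight")
    tip_len = max(lengths[e] for e in endpoints if e in lengths)
    return tip_len * mm_per_px
```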
{"title":"LenRuler: a rice-centric method for automated radicle length measurement with multicrop validation.","authors":"Jinfeng Zhao, Zeyu Hou, Hua Hua, Qianlong Nie, Yuqian Pang, Yan Ma, Xuehui Huang","doi":"10.1016/j.plaphe.2025.100103","DOIUrl":"10.1016/j.plaphe.2025.100103","url":null,"abstract":"<p><p>Radicle length is a critical indicator of seed vigor, germination capacity, and seedling growth potential. However, existing measurement methods face challenges in automation, efficiency, and generalizability, often requiring manual intervention or re-annotation for different seed types. To address these limitations, this paper proposes an automated method, LenRuler, with a primary focus on rice seeds and validation in multiple crops. The method leverages the Segment Anything Model (SAM) as the foundational segmentation model and employs a coarse-to-fine segmentation strategy combined with Gaussian-based classification to automatically generate bounding boxes and centroids, which are then fed into SAM for precise segmentation of the seed coat and radicle. The radicle length is subsequently computed by converting the geodesic distance between the radicle skeleton's farthest endpoint and its nearest intersection with the seed coat skeleton into the true length. Experiments on the Riceseed1 dataset show that the proposed method achieves a Dice coefficient of 0.955 and a Pixel Accuracy of 0.944, demonstrating excellent segmentation performance. Radicle length measurement experiments on the Riceseed2 test set show that the Mean Absolute Error (MAE) was 0.273 mm and the coefficient of determination (R<sup>2</sup>) was 0.982, confirming the method's high precision for rice. On the Otherseed dataset, the predicted radicle lengths for maize (<i>Zea mays</i>), pearl millet (<i>Pennisetum glaucum</i>), and rye (<i>Secale cereale</i>) are consistent with the observed radicle length distributions, demonstrating strong cross-species performance. These results establish LenRuler as an accurate and automated solution for radicle length measurement in rice, with validated applicability to other crop species.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 3","pages":"100103"},"PeriodicalIF":6.4,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12710053/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145782257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring the depth of the maize canopy LAI detected by spectroscopy based on simulations and in situ measurements
Pub Date: 2025-09-07 | eCollection Date: 2025-09-01 | DOI: 10.1016/j.plaphe.2025.100100
Jinpeng Cheng, Jiao Wang, Dan Zhao, Fenghui Duan, Qiang Wu, Yongliang Lai, Jianbo Qi, Shuping Xiong, Hongbo Qiao, Xinming Ma, Hao Yang, Guijun Yang
The vertical distribution of leaves plays a crucial role in maize growth, and understanding the vertical spectral characteristics of maize leaves is essential for monitoring it. However, accurate estimation of the vertical distribution of leaf area remains a significant challenge in practical investigations. To address this, we used a three-dimensional radiative transfer model (3D RTM) to simulate the layered canopy spectra of maize, revealing the impact of canopy structure on remote sensing penetration depth across growth stages and planting densities. The results revealed differences in detection depth across growth stages: during the early growth stage, detection was concentrated in the bottom 1 to 3 leaves of the canopy, reaching 1 to 4 leaves at the ear stage and 1 to 7 leaves during the grain-filling stage. Planting density had a notable effect on the detection depth at the bottom of the canopy. Moreover, compared with the other spectral bands, the near-infrared range exhibited greater sensitivity to density variations. For leaf area index (LAI) inversion, a FuseBell-Hybrid model was constructed. We analyzed vegetation indices (VIs) across different planting density and canopy structural scenarios and found that, compared with lower layers, increased density reduced the relative change rate in the upper leaf layers. Sensitivity patterns differed between plant architectures: VIred exhibited density-dependent sensitivity, with distinct responses between plant types, and MTVI2 demonstrated optimal performance for mid-canopy monitoring. This study highlights the influence of the heterogeneous structural characteristics of maize canopies on remote sensing detection depth during different phenological stages, providing theoretical support for enhancing multilayer crop monitoring in precision agriculture.
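Of the vegetation indices the study compares, MTVI2 has a standard closed form (Haboudane et al., 2004); the sketch below computes it from band reflectances. The band assignments (NIR ≈ 800 nm, red ≈ 670 nm, green ≈ 550 nm) follow the common definition and are not taken from this paper.

```python
# Standard MTVI2 formulation (Haboudane et al., 2004); generic sketch, not this paper's code.
import numpy as np

def mtvi2(nir, red, green):
    """Modified Triangular Vegetation Index 2 from reflectances in [0, 1]."""
    num = 1.5 * (1.2 * (nir - green) - 2.5 * (red - green))
    den = np.sqrt((2 * nir + 1) ** 2 - (6 * nir - 5 * np.sqrt(red)) - 0.5)
    return num / den

# Example: a dense mid-canopy pixel (illustrative reflectance values).
print(mtvi2(nir=0.45, red=0.05, green=0.08))  # ~0.63
```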
{"title":"Exploring the depth of the maize canopy LAI detected by spectroscopy based on simulations and in situ measurements.","authors":"Jinpeng Cheng, Jiao Wang, Dan Zhao, Fenghui Duan, Qiang Wu, Yongliang Lai, Jianbo Qi, Shuping Xiong, Hongbo Qiao, Xinming Ma, Hao Yang, Guijun Yang","doi":"10.1016/j.plaphe.2025.100100","DOIUrl":"10.1016/j.plaphe.2025.100100","url":null,"abstract":"<p><p>The vertical distribution of leaves plays a crucial role in the growth process of maize. Understanding the vertical spectral characteristics of maize leaves is crucial for monitoring their growth. However, accurate estimation of the vertical distribution of leaf area remains a significant challenge in practical investigations. To address this, we used a 3D RTM to simulate the layered canopy spectra of maize, revealing the impact of canopy structure on remote sensing penetration depth across different growth stages and planting densities. The results of this study revealed differences in detection depth across growth stages. During the early growth stage, the depth was concentrated in the bottom 1 to 3 leaves of the canopy, reaching 1 to 4 leaves at the ear stage and 1 to 7 leaves during the grain-filling stage. The planting density had a notable effect on the detection depth at the bottom of the canopy. Moreover, compared with the other spectral bands, the near-infrared spectral range exhibited greater sensitivity to density variations. In terms of LAI inversion, a FuseBell-Hybrid model was constructed. We analyzed VIs across different planting density and canopy structural scenarios and found that compared with lower layers, increased density reduced the relative change rate in the upper leaf layers. The sensitivity patterns differed between plant architectures: VIred exhibited density-dependent sensitivity, with distinct responses between plant types, and MTVI2 demonstrated optimal performance for mid-canopy monitoring. This study highlights the influence of the heterogeneous structural characteristics of maize canopies on remote sensing detection depth during different phenological stages, providing theoretical support for enhancing multilayer crop monitoring in precision agriculture.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 3","pages":"100100"},"PeriodicalIF":6.4,"publicationDate":"2025-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12709897/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145782537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of an automated phenotyping platform and identification of a novel QTL for drought tolerance in soybean
Pub Date: 2025-09-07 | eCollection Date: 2025-09-01 | DOI: 10.1016/j.plaphe.2025.100102
Hakyung Kwon, Suk-Ha Lee, Moon Young Kim, Jungmin Ha
A deep understanding of slow wilting is essential for developing drought-tolerant crops. Existing approaches to measuring transpiration rates are difficult to apply to large populations because of their high cost and low throughput. To overcome these challenges, we developed a high-throughput phenotyping system that integrates a load cell sensor with an Arduino-based microcontroller. The system tracked the transpiration rate in real time by measuring changes in pot weight across 224 recombinant inbred lines of Taekwangkong (fast-wilting) × SS2-2 (slow-wilting) under water-restricted conditions. Among the five transpiration features we derived, the stress recognition time point (SRTP) and the decrease in transpiration rate by stress (DTrs) proved most informative; the two are interconnected yet each independently affects slow wilting. Quantitative trait loci (QTL) for SRTP and DTrs were identified at the same location as qSW_Gm10, the major QTL for slow wilting reported in a previous study. Notably, we found a novel major QTL for DTrs, qDTrs_Gm04, with a LOD value of 42 that explained 47 % of the phenotypic variance (PVE). GmWRKY58 was selected as a candidate gene for qDTrs_Gm04 on the basis of its differential expression between the parental lines under drought conditions and its upstream sequence variation. Our high-throughput system benefits not only biological research but also breeding programs for drought-tolerant lines.
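As a rough illustration of how SRTP and DTrs could be derived from the logged pot-weight series, the sketch below computes a transpiration rate as the negative weight derivative and thresholds it against an unstressed baseline. The smoothing window, baseline period, and drop threshold are illustrative assumptions; the paper's exact feature definitions may differ.

```python
# Hedged sketch of SRTP/DTrs extraction from a pot-weight time series; definitions assumed.
import numpy as np

def transpiration_features(t_hours, weight_g, drop_frac=0.2):
    """Transpiration rate = -d(weight)/dt; SRTP = first time the smoothed rate falls
    below (1 - drop_frac) of the well-watered baseline; DTrs = baseline - stressed rate."""
    rate = -np.gradient(weight_g, t_hours)            # g per hour lost to transpiration
    kernel = np.ones(5) / 5                           # simple moving-average smoothing
    rate_s = np.convolve(rate, kernel, mode="same")
    baseline = rate_s[: len(rate_s) // 4].mean()      # assume the first quarter is unstressed
    below = np.nonzero(rate_s < (1 - drop_frac) * baseline)[0]
    srtp = t_hours[below[0]] if below.size else np.nan
    dtrs = baseline - rate_s[-(len(rate_s) // 4):].mean()  # drop vs. final-quarter mean
    return srtp, dtrs
```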
{"title":"Development of an automated phenotyping platform and identification of a novel QTL for drought tolerance in soybean.","authors":"Hakyung Kwon, Suk-Ha Lee, Moon Young Kim, Jungmin Ha","doi":"10.1016/j.plaphe.2025.100102","DOIUrl":"10.1016/j.plaphe.2025.100102","url":null,"abstract":"<p><p>Deep understanding of slow-wilting is essential for developing drought-tolerant crops. Existing approaches to measure transpiration rates are difficult to apply to large populations due to their high cost and low throughput. To overcome these challenges, we developed a high-throughput phenotyping system that integrates a load cell sensor and an Arduino-based microcontroller device. The system tracked the transpiration rate in real time by measuring changes in the pot weight in 224 recombinant inbred lines of Taekwangkong (fast-wilting) x SS2-2 (slow-wilting) under water-restricted conditions. Among five transpiration features we determined, stress recognition time point (SRTP) and decrease in transpiration rate by stress (DTrs) are informative parameters, that are interconnected and independently affect slow-wilting as well. Quantitative trait loci (QTL) for SRTP and DTrs were identified at the same location as the major QTL for slow wilting, <i>qSW_Gm10</i>, identified in the previous study. Notably, we found a novel major QTL for DTrs, <i>qDTrs_Gm04</i>, with a LOD value of 42 and PVE of 47 %. As a candidate gene for <i>qDTrs_Gm04</i>, <i>GmWRKY58</i> was selected with differential expression between the parental lines under drought conditions as well as upstream sequence variation. Our high-throughput system is of help not only to biological research but breeding programs of drought-tolerant lines.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 3","pages":"100102"},"PeriodicalIF":6.4,"publicationDate":"2025-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12709953/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145782489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Precise Image Color Correction Based on Dual Unmanned Aerial Vehicle Cooperative Flight
Pub Date: 2025-09-05 | eCollection Date: 2025-09-01 | DOI: 10.1016/j.plaphe.2025.100101
Xuqi Lu, Jiayang Xie, Jiayou Yan, Ji Zhou, Haiyan Cen
Color accuracy and consistency in remote sensing imagery are crucial for reliable plant health monitoring, precise growth stage identification, and stress detection. However, without effective color correction, variations in lighting and sensor sensitivity often cause color distortions between images, compromising data quality and analysis. This study introduces a novel in-flight color correction approach for RGB imagery using cooperative dual unmanned aerial vehicle (UAV) flights integrated with a color chart (CoF-CC). The method employs a master UAV equipped with an RGB camera for image acquisition and a synchronized secondary UAV carrying a ColorChecker (X-Rite) chart, ensuring persistent visibility of the chart within the master UAV's imaging field; from the chart, a color correction matrix (CCM) is calculated for in-flight image correction. Field experiments validated the method by analyzing cross-sensor color consistency, assessing color measurement accuracy on field-grown rice leaves, and demonstrating its practical applications using rice maturity estimation as an example. The results indicated that the CCM significantly enhanced color accuracy, with a 66.1 % reduction in the average CIE 2000 color difference (ΔE), and improved color consistency among the six RGB sensors, with a 70.2 % increase in the intracluster distance. CoF-CC subsequently reduced ΔE between the corrected rice leaf color and ground-truth measurements from 18.2 to 5.0, leaving color differences that are barely perceptible to the human eye. Moreover, the corrected imagery significantly enhanced rice maturity prediction accuracy, improving the R² from 0.28 to 0.67. In summary, the CoF-CC method standardizes RGB images across diverse lighting conditions and sensors, demonstrating robust performance in color analysis and interpretation under open-field conditions.
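The CCM at the heart of CoF-CC can be fit by least squares once the ColorChecker patches are located in the master UAV's frame. A minimal sketch follows, assuming a linear 3×3 correction on RGB values in [0, 1]; the paper's exact parameterization may differ.

```python
# Generic least-squares CCM fit from ColorChecker patches; a sketch, not the CoF-CC code.
import numpy as np

def fit_ccm(measured_rgb: np.ndarray, reference_rgb: np.ndarray) -> np.ndarray:
    """Solve measured @ M ≈ reference for a 3x3 matrix M.
    Rows are patches (24 for a ColorChecker), columns are R, G, B in [0, 1]."""
    M, *_ = np.linalg.lstsq(measured_rgb, reference_rgb, rcond=None)
    return M

def apply_ccm(image: np.ndarray, M: np.ndarray) -> np.ndarray:
    """Apply the CCM to an (H, W, 3) float image and clip to the valid range."""
    return np.clip(image.reshape(-1, 3) @ M, 0, 1).reshape(image.shape)
```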
{"title":"Precise Image Color Correction Based on Dual Unmanned Aerial Vehicle Cooperative Flight.","authors":"Xuqi Lu, Jiayang Xie, Jiayou Yan, Ji Zhou, Haiyan Cen","doi":"10.1016/j.plaphe.2025.100101","DOIUrl":"10.1016/j.plaphe.2025.100101","url":null,"abstract":"<p><p>Color accuracy and consistency in remote sensing imagery are crucial for reliable plant health monitoring, precise growth stage identification, and stress detection. However, without effective color correction, variations in lighting and sensor sensitivity often cause color distortions between images, compromising data quality and analysis. This study introduces a novel in-flight color correction approach for RGB imagery using cooperative dual unmanned aerial vehicle (UAV) flights integrated with a color chart (CoF-CC). The method employs a master UAV equipped with an RGB camera for image acquisition and a synchronized secondary UAV carrying a ColorChecker (X-Rite) chart, ensuring persistent visibility of the chart within the imaging field of the master UAV for the calculation of a color correction matrix (CCM) for in-flight image correction. Field experiments validated the method by analyzing cross-sensor color consistency, assessing color measurement accuracy on field-grown rice leaves, and demonstrating its practical applications using rice maturity estimation as an example. The results indicated that the CCM significantly enhanced color accuracy, with a 66.1 % reduction in the average CIE 2000 color difference (ΔE), and improved color consistency among the six RGB sensors, with a 70.2 % increase in the intracluster distance. CoF-CC subsequently reduced ΔE from 18.2 to 5.0 between the corrected rice leaf color and ground-truth measurements, indicating that the color differences were nearly perceptible to the human eye. Moreover, the corrected imagery significantly enhanced the rice maturity prediction accuracy, improving the R<sup>2</sup> from 0.28 to 0.67. In summary, the CoF-CC method standardizes RGB images across diverse lighting conditions and sensors, demonstrating robust performance in color analysis and interpretation under open-field conditions.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 3","pages":"100101"},"PeriodicalIF":6.4,"publicationDate":"2025-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12710031/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145782357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Global rice multiclass segmentation dataset (RiceSEG): comprehensive and diverse high-resolution RGB-annotated images for the development and benchmarking of rice segmentation algorithms
Pub Date: 2025-09-04 | eCollection Date: 2025-09-01 | DOI: 10.1016/j.plaphe.2025.100099
Junchi Zhou, Haozhou Wang, Yoichiro Kato, Tejasri Nampally, P Rajalakshmi, M Balram, Keisuke Katsura, Hao Lu, Yue Mu, Wanneng Yang, Yangmingrui Gao, Feng Xiao, Hongtao Chen, Yuhao Chen, Wenjuan Li, Jingwen Wang, Fenghua Yu, Jian Zhou, Wensheng Wang, Xiaochun Hu, Yuanzhu Yang, Yanfeng Ding, Wei Guo, Shouyang Liu
The development of computer vision-based rice phenotyping techniques is crucial for precision field management and accelerated breeding, both of which continuously advance rice production. Among phenotyping tasks, distinguishing image components is a key prerequisite for characterizing plant growth and development at the organ scale, enabling deeper insights into ecophysiological processes. However, owing to the fine structure of rice organs and complex illumination within the canopy, this task remains highly challenging, underscoring the need for a high-quality training dataset. Such datasets are scarce, both because of a lack of large, representative collections of rice field images and because of the time-intensive nature of annotation. To address this gap, we created the first comprehensive multiclass rice semantic segmentation dataset, RiceSEG. We gathered nearly 50,000 high-resolution, ground-based images from five major rice-growing countries (China, Japan, India, the Philippines, and Tanzania), encompassing more than 6000 genotypes across all growth stages. From these original images, 3078 representative samples were selected and annotated with six classes (background, green vegetation, senescent vegetation, panicle, weeds, and duckweed) to form the RiceSEG dataset. Notably, the subdataset from China spans all major genotypes and rice-growing environments from northeastern to southern regions. Both state-of-the-art convolutional neural networks and transformer-based semantic segmentation models were used as baselines. While these models perform reasonably well in segmenting background and green vegetation, they face difficulties during the reproductive stage, when canopy structures are more complex and multiple classes are involved. These findings highlight the importance of our dataset for developing specialized segmentation models for rice and other crops. The RiceSEG dataset is publicly available at www.global-rice.com.
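For benchmarking segmentation baselines on the six RiceSEG classes, per-class IoU and their mean (mIoU) are the standard yardsticks. A minimal evaluation sketch follows; the class order is assumed from the abstract, not from the dataset's released label map.

```python
# Generic per-class IoU / mIoU evaluation for six-class label maps; a sketch, class indices assumed.
import numpy as np

CLASSES = ["background", "green vegetation", "senescent vegetation",
           "panicle", "weeds", "duckweed"]

def per_class_iou(pred: np.ndarray, gt: np.ndarray, n_classes: int = 6) -> np.ndarray:
    """IoU per class from integer label maps of identical shape."""
    conf = np.bincount(n_classes * gt.ravel() + pred.ravel(),
                       minlength=n_classes ** 2).reshape(n_classes, n_classes)
    inter = np.diag(conf).astype(float)
    union = conf.sum(0) + conf.sum(1) - inter
    return inter / np.maximum(union, 1)

# mIoU is simply per_class_iou(pred, gt).mean().
```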
{"title":"Global rice multiclass segmentation dataset (RiceSEG): comprehensive and diverse high-resolution RGB-annotated images for the development and benchmarking of rice segmentation algorithms.","authors":"Junchi Zhou, Haozhou Wang, Yoichiro Kato, Tejasri Nampally, P Rajalakshmi, M Balram, Keisuke Katsura, Hao Lu, Yue Mu, Wanneng Yang, Yangmingrui Gao, Feng Xiao, Hongtao Chen, Yuhao Chen, Wenjuan Li, Jingwen Wang, Fenghua Yu, Jian Zhou, Wensheng Wang, Xiaochun Hu, Yuanzhu Yang, Yanfeng Ding, Wei Guo, Shouyang Liu","doi":"10.1016/j.plaphe.2025.100099","DOIUrl":"10.1016/j.plaphe.2025.100099","url":null,"abstract":"<p><p>The development of computer vision-based rice phenotyping techniques is crucial for precision field management and accelerated breeding, which facilitate continuously advancing rice production. Among phenotyping tasks, distinguishing image components is a key prerequisite for characterizing plant growth and development at the organ scale, enabling deeper insights into ecophysiological processes. However, owing to the fine structure of rice organs and complex illumination within the canopy, this task remains highly challenging, underscoring the need for a high-quality training dataset. Such datasets are scarce, both because of a lack of large, representative collections of rice field images and because of the time-intensive nature of the annotation. To address this gap, we created the first comprehensive multiclass rice semantic segmentation dataset, RiceSEG. We gathered nearly 50,000 high-resolution, ground-based images from five major rice-growing countries (China, Japan, India, the Philippines, and Tanzania), encompassing more than 6000 genotypes across all growth stages. From these original images, 3078 representative samples were selected and annotated with six classes (background, green vegetation, senescent vegetation, panicle, weeds, and duckweed) to form the RiceSEG dataset. Notably, the subdataset from China spans all major genotypes and rice-growing environments from northeastern to southern regions. Both state-of-the-art convolutional neural networks and transformer-based semantic segmentation models were used as baselines. While these models perform reasonably well in segmenting background and green vegetation, they face difficulties during the reproductive stage, when canopy structures are more complex and when multiple classes are involved. These findings highlight the importance of our dataset for developing specialized segmentation models for rice and other crops. The RiceSEG dataset is publicly available at www.global-rice.com.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 3","pages":"100099"},"PeriodicalIF":6.4,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12710049/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145781897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TM-WSNet: A precise segmentation method for individual rubber trees based on UAV LiDAR point cloud
Pub Date: 2025-08-21 | eCollection Date: 2025-09-01 | DOI: 10.1016/j.plaphe.2025.100093
Lele Yan, Guoxiong Zhou, Miying Yan, Xiangjun Wang
Rubber products have become an important strategic resource in the global economy. However, individual rubber tree segmentation in plantation environments remains challenging due to canopy background interference and significant morphological variations among trees. To address these issues, we propose a high-precision segmentation network, TM-WSNet (Spatial Geometry Enhanced Hybrid Feature Extraction Module-Wavelet Grid Feature Fusion Encoder Segmentation Network). First, we introduce SGTramba, a hybrid feature extraction module combining Grouped Transformer and Mamba architectures, designed to reduce confusion between tree crown boundaries and surrounding vegetation or background elements. Second, we propose the WGMS encoder, which enhances structural feature recognition by applying wavelet-based spatial grid downsampling and multiscale feature fusion, effectively handling variations in canopy shape and tree height. Third, a scale optimization algorithm (SCPO) is developed to adaptively search for the optimal learning rate, addressing uneven learning across different resolution scales. We evaluate TM-WSNet on a self-constructed dataset (RubberTree) and two public datasets (ShapeNetPart and ForestSemantic), where it consistently achieves high segmentation accuracy and robustness. In practical field tests, our method accurately predicts key rubber tree parameters (height, crown width, and diameter at breast height), with coefficients of determination (R²) of 1.00, 0.99, and 0.89, respectively. These results demonstrate TM-WSNet's strong potential for supporting precision rubber yield estimation and health monitoring in complex plantation environments.
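The field validation reports diameter at breast height (DBH) among the predicted parameters. One common way to obtain DBH from a segmented single-tree point cloud, shown here as a generic sketch rather than TM-WSNet's procedure, is an algebraic (Kåsa) circle fit on a thin stem slice at 1.3 m:

```python
# Generic Kåsa circle fit for DBH from a single-tree point cloud; illustrative only.
import numpy as np

def dbh_from_points(xyz: np.ndarray, ground_z: float, slice_halfwidth: float = 0.05) -> float:
    """Fit a circle to stem points in the [1.3 - hw, 1.3 + hw] m slice; return diameter (m)."""
    z = xyz[:, 2] - ground_z
    sl = xyz[np.abs(z - 1.3) < slice_halfwidth, :2]
    x, y = sl[:, 0], sl[:, 1]
    # Kåsa fit: solve x^2 + y^2 = a*x + b*y + c in least squares,
    # where a = 2*cx, b = 2*cy, c = r^2 - cx^2 - cy^2.
    A = np.column_stack([x, y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)[0]
    radius = np.sqrt(c + (a / 2) ** 2 + (b / 2) ** 2)
    return 2 * radius
```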
{"title":"TM-WSNet: A precise segmentation method for individual rubber trees based on UAV LiDAR point cloud.","authors":"Lele Yan, Guoxiong Zhou, Miying Yan, Xiangjun Wang","doi":"10.1016/j.plaphe.2025.100093","DOIUrl":"10.1016/j.plaphe.2025.100093","url":null,"abstract":"<p><p>Rubber products have become an important strategic resource in the global economy. However, individual rubber tree segmentation in plantation environments remains challenging due to canopy background interference and significant morphological variations among trees. To address these issues, we propose a high-precision segmentation network,TM-WSNet (Spatial Geometry Enhanced Hybrid Feature Extraction Module-Wavelet Grid Feature Fusion Encoder Segmentation Network). First, we introduce SGTramba, a hybrid feature extraction module combining Grouped Transformer and Mamba architectures, designed to reduce confusion between tree crown boundaries and surrounding vegetation or background elements. Second, we propose the WGMS encoder, which enhances structural feature recognition by applying wavelet-based spatial grid downsampling and multiscale feature fusion, effectively handling variations in canopy shape and tree height. Third, a scale optimization algorithm (SCPO) is developed to adaptively search for the optimal learning rate, addressing uneven learning across different resolution scales. We evaluate TM-WSNet on a self-constructed dataset (RubberTree) and two public datasets (ShapeNetPart and ForestSemantic), where it consistently achieves high segmentation accuracy and robustness. In practical field tests, our method accurately predicts key rubber tree parameters-height, crown width, and diameter at breast height with coefficients of determination (R<sup>2</sup>) of 1.00, 0.99, and 0.89, respectively. These results demonstrate TM-WSNet's strong potential for supporting precision rubber yield estimation and health monitoring in complex plantation environments.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 3","pages":"100093"},"PeriodicalIF":6.4,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12709891/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145782525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
De-occlusion models and diffusion-based data augmentation for size estimation of on-plant oriental melons
Pub Date: 2025-08-21 | eCollection Date: 2025-09-01 | DOI: 10.1016/j.plaphe.2025.100097
Sungjay Kim, Xianghui Xin, Sang-Yeon Kim, Gyumin Kim, Min-Gyu Baek, Do Yeon Won, Chang Hyeon Baek, Ghiseok Kim
Accurate fruit size estimation is crucial for plant phenotyping, as it enables precise crop management and enhances agricultural productivity by providing essential data for growth and resource efficiency analysis. In this study, we estimated the size of on-plant oriental melons grown in a vertical cultivation system to address the challenges posed by leaf occlusion. Data augmentation was performed with a diffusion model that generates synthetic leaves to cover existing fruits, creating an enriched dataset. Three instance segmentation models, namely the mask region-based convolutional neural network (Mask R-CNN), Mask2Former, and the detection transformer (DETR), and six de-occlusion models derived from these architectures were implemented. These models successfully inferred both visible and occluded areas of the fruit. Notably, Amodal Mask2Former and occlusion-aware RCNN (ORCNN) achieved average precision scores of 85.92 % and 85.35 %, respectively. The inferred masks were used to estimate fruit height and diameter, with Amodal Mask2Former yielding mean absolute errors of 5.46 mm and 4.20 mm and mean absolute percentage errors of 4.86 % and 5.33 %, respectively. The results indicate that the transformer-based Amodal Mask2Former outperforms CNN architectures in de-occlusion tasks and size estimation. Finally, the improvement of the de-occlusion models over conventional models was assessed and demonstrated across occlusion ratios ranging from 0 to 70 %. However, generating synthetic datasets with occlusion ratios over 70 % remains a limitation.
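Two quantities anchor the melon experiments: the occlusion ratio that the synthetic-leaf augmentation controls, and the height/diameter read off an inferred amodal mask. A minimal sketch follows, assuming boolean masks and a known mm-per-pixel scale (both assumptions here, not details from the paper):

```python
# Illustrative occlusion-ratio and mask-based size computation; conventions assumed.
import numpy as np

def occlusion_ratio(visible: np.ndarray, amodal: np.ndarray) -> float:
    """Fraction of the full (amodal) fruit region hidden by foreground leaves."""
    return 1.0 - visible.sum() / max(int(amodal.sum()), 1)

def fruit_size_mm(amodal: np.ndarray, mm_per_px: float):
    """Height and diameter from the amodal mask's bounding box."""
    ys, xs = np.nonzero(amodal)
    height = (ys.max() - ys.min() + 1) * mm_per_px
    diameter = (xs.max() - xs.min() + 1) * mm_per_px
    return height, diameter
```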
{"title":"De-occlusion models and diffusion-based data augmentation for size estimation of on-plant oriental melons.","authors":"Sungjay Kim, Xianghui Xin, Sang-Yeon Kim, Gyumin Kim, Min-Gyu Baek, Do Yeon Won, Chang Hyeon Baek, Ghiseok Kim","doi":"10.1016/j.plaphe.2025.100097","DOIUrl":"10.1016/j.plaphe.2025.100097","url":null,"abstract":"<p><p>Accurate fruit size estimation is crucial for plant phenotyping, as it enables precise crop management and enhances agricultural productivity by providing essential data for growth and resource efficiency analysis. In this study, we estimated the size of on-plant oriental melons grown in a vertical cultivation system to address the challenges posed by leaf occlusion. Data augmentation was achieved using a diffusion model to generate synthetic leaves to cover existing fruits and create an enriched dataset. Three instance segmentation models-mask region-based convolutional neural network (CNN), Mask2Former, and detection transformer (DETR)-and six de-occlusion models derived from these architectures were implemented. These models successfully inferred both visible and occluded areas of the fruit. Notably, Amodal Mask2Former and occlusion-aware RCNN (ORCNN) achieved average precision scores of 85.92 % and 85.35 %, respectively. The inferred masks were used to estimate the height and diameter of the fruit, with Amodal Mask2Former yielding a mean absolute error of 5.46 mm and 4.20 mm and a mean absolute percentage error of 4.86 % and 5.33 %, respectively. The results indicate enhanced performance of the transformer-based Amodal Mask2Former over CNN architectures in de-occlusion tasks and size estimation. Finally, the enhancement in de-occlusion models compared to conventional models was assessed and demonstrated across occlusion ratios ranging from 0 to 70 %. However, generating synthetic datasets with occlusion ratios over 70 % remains a limitation.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 3","pages":"100097"},"PeriodicalIF":6.4,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12709959/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145782535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
KDOSS-net: Knowledge distillation-based outpainting and semantic segmentation network for crop and weed images
Pub Date: 2025-08-20 | eCollection Date: 2025-09-01 | DOI: 10.1016/j.plaphe.2025.100098
Sang Hyo Cheong, Sung Jae Lee, Su Jin Im, Juwon Seo, Kang Ryoung Park
Weed management plays a crucial role in increasing crop yields. Semantic segmentation, which classifies each pixel in a camera image into categories such as crops, weeds, and background, is a widely used method in this context. However, conventional semantic segmentation methods rely solely on pixel information within the camera's field of view (FOV), hindering their ability to detect weeds outside the visible area. This limitation can lead to incomplete weed removal and inefficient herbicide application. Incorporating information beyond the FOV into crop and weed segmentation is therefore essential for effective herbicide usage; nevertheless, existing research has largely overlooked this limitation. To address this issue, we propose the knowledge distillation-based outpainting and semantic segmentation network (KDOSS-Net) for crop and weed images, a novel framework that enhances segmentation accuracy by leveraging information beyond the FOV. KDOSS-Net consists of two parts: the object prediction-guided outpainting and semantic segmentation network (OPOSS-Net), which serves as the teacher model by restoring areas outside the FOV and performing semantic segmentation, and the semantic segmentation without outpainting network (SSWO-Net), which serves as the student model and performs segmentation directly, without outpainting. Through knowledge distillation (KD), the student model learns from the teacher's outputs, resulting in a lightweight yet highly accurate segmentation network suitable for deployment on agricultural robots with limited computing power. Experiments on three public datasets (Rice seedling and weed, CWFID, and BoniRob) yielded mean intersection over union (mIoU) scores of 0.6315, 0.7101, and 0.7524, respectively. These results demonstrate that KDOSS-Net achieves higher accuracy than existing state-of-the-art (SOTA) segmentation models while significantly reducing computational overhead. Furthermore, the weed information extracted by our method is automatically passed as input to the open-source large language and vision assistant (LLaVA), enabling a system that recommends optimal herbicide strategies tailored to the detected weed class.
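The teacher-to-student transfer in KDOSS-Net relies on knowledge distillation; the sketch below shows the standard temperature-scaled KL-divergence loss (Hinton et al.) applied to per-pixel class distributions. This is the generic formulation; the paper's actual loss composition and weighting are not specified here.

```python
# Standard temperature-scaled distillation loss for dense prediction; a generic sketch.
import torch
import torch.nn.functional as F

def kd_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor,
            temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between softened teacher and student class distributions.
    Logits have shape (N, C, H, W); the T^2 factor keeps the gradient scale comparable
    to a hard-label cross-entropy term."""
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=1)
    p_teacher = F.softmax(teacher_logits / t, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * t * t
```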
{"title":"KDOSS-net: Knowledge distillation-based outpainting and semantic segmentation network for crop and weed images.","authors":"Sang Hyo Cheong, Sung Jae Lee, Su Jin Im, Juwon Seo, Kang Ryoung Park","doi":"10.1016/j.plaphe.2025.100098","DOIUrl":"10.1016/j.plaphe.2025.100098","url":null,"abstract":"<p><p>Weed management plays a crucial role in increasing crop yields. Semantic segmentation, which classifies each pixel in an image captured by a camera into categories such as crops, weeds, and background, is a widely used method in this context. However, conventional semantic segmentation methods rely solely on pixel information within the camera's field of view (FOV), hindering their ability to detect weeds outside the visible area. This limitation can lead to incomplete weed removal and inefficient herbicide application. Incorporating information beyond the FOV in crop and weed segmentation is therefore essential for effective herbicide usage. Nevertheless, existing research on crop and weed segmentation has largely overlooked this limitation. To address this issue, we propose the knowledge distillation-based outpainting and semantic segmentation network (KDOSS-Net) for crop and weed images, a novel framework that enhances segmentation accuracy by leveraging information beyond the FOV. KDOSS-Net consists of two parts: the object prediction-guided outpainting and semantic segmentation network (OPOSS-Net), which serves as the teacher model by restoring areas outside the FOV and performing semantic segmentation, and the semantic segmentation without outpainting network (SSWO-Net), which serves as the student model, directly performing segmentation without outpainting. Through knowledge distillation (KD), the student model learns from the teacher's outputs, which results in a lightweight yet highly accurate segmentation network that is suitable for deployment on agricultural robots with limited computing power. Experiments on three public datasets-Rice seedling and weed, CWFID, and BoniRob-yielded mean intersection over union (<i>mIOU</i>) scores of 0.6315, 0.7101, and 0.7524, respectively. These results demonstrate that KDOSS-Net achieves higher accuracy than existing state-of-the-art (SOTA) segmentation models while significantly reducing computational overhead. Furthermore, the weed information extracted using our method is automatically linked as input to the open-source large language and vision assistant (LLaVA), enabling the development of a system that recommends optimal herbicide strategies tailored to the detected weed class.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 3","pages":"100098"},"PeriodicalIF":6.4,"publicationDate":"2025-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12710004/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145781953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of Wheat Spike Morphological Traits by 2D Imaging
Pub Date: 2025-08-14 | eCollection Date: 2025-09-01 | DOI: 10.1016/j.plaphe.2025.100096
Fujun Sun, Shusong Zheng, Zongyang Li, Qi Gao, Ni Jiang
Wheat spike morphology plays a critical role in determining grain yield and has garnered significant interest in genetics and breeding research. However, traditional measurement methods are limited to simple traits and fail to capture complex spike phenotypes with high precision, limiting progress in yield-related trait analysis. In this study, a deep learning pipeline called Speakerphone was developed for acquiring precise wheat spike phenotypes. Our pipeline achieved a mean intersection over union (mIoU) of 0.948 in spike segmentation. Additionally, the spike traits measured by our method strongly agreed with manually measured values, with Pearson correlation coefficients of 0.9865 for spike length, 0.9753 for the number of spikelets per spike, and 0.9635 for fertile spikelets. Using experimental data from 221 wheat cultivars of various regional origins, collected in Zhao County, Hebei Province, China, our pipeline extracted 45 phenotypes and analyzed their correlations with thousand-grain weight (TGW) and spike yield. Our findings indicate that precise measurements of spike area, spikelet area, and other phenotypic traits clarify the correlation between spike morphology and wheat yield. Through hierarchical clustering on the basis of spike morphology, we categorized wheat spikes into six classes and identified the phenotypic differences among these classes and their effects on TGW and yield. Furthermore, this study revealed phenotypic differences among wheat cultivars from different geographical regions and across decades, with an increase in the number of large-spike cultivars over time, especially in southern China. This research may help breeders understand the relationship between wheat spike morphology and yield, providing an important basis for future wheat breeding efforts.
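The six morphology classes come from hierarchical clustering of spike traits; below is a minimal SciPy sketch on standardized trait vectors. Ward linkage and the example trait columns are assumptions, and the paper's 45-phenotype feature set is richer.

```python
# Generic hierarchical clustering of spike trait vectors into six classes; a sketch.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_spikes(traits: np.ndarray, n_classes: int = 6) -> np.ndarray:
    """Rows = spikes, columns = traits (e.g., spike length, spikelet count, spike area).
    Returns cluster labels 1..n_classes."""
    z = (traits - traits.mean(0)) / traits.std(0)          # standardise each trait
    tree = linkage(z, method="ward")                       # agglomerative clustering
    return fcluster(tree, t=n_classes, criterion="maxclust")
```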
{"title":"Analysis of Wheat Spike Morphological Traits by 2D Imaging.","authors":"Fujun Sun, Shusong Zheng, Zongyang Li, Qi Gao, Ni Jiang","doi":"10.1016/j.plaphe.2025.100096","DOIUrl":"10.1016/j.plaphe.2025.100096","url":null,"abstract":"<p><p>Wheat spike morphology plays a critical role in determining grain yield and has garnered significant interest in genetics and breeding research. However, traditional measurement methods are limited to simple traits and fail to capture complex spike phenotypes with high precision, thus limiting progress in yield-related trait analysis. In this study, a deep learning pipeline, called Speakerphone, for acquiring precise wheat spike phenotypes was developed. Our pipeline achieved a mean intersection over union (mIoU) of 0.948 in spike segmentation. Additionally, the spike traits measured by our method strongly agreed with the manually measured values, with Pearson correlation coefficients of 0.9865 for spike length, 0.9753 for the number of spikelets per spike, and 0.9635 for fertile spikelets. Using experimental data of 221 wheat cultivars from various regions of Zhao County, Hebei Province, China, our pipeline extracted 45 phenotypes and analyzed their correlations with thousand-grain weight (TGW) and spike yield. Our findings indicate that precise measurements of spike area, spikelet area, and other phenotypic traits clarify the correlation between spike morphology and wheat yield. Through hierarchical clustering on the basis of spike morphology, we categorized wheat spikes into six classes and identified the phenotypic differences among these classes and their effects on TGW and yield. Furthermore, phenotypic differences among wheat cultivars from different geographical regions and over decades were revealed in this study, with an increase in the number of large-spike cultivars over time, especially in southern China. This research may help breeders understand the relationship between wheat spike morphology and yield, thus providing an important basis for future wheat breeding efforts.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 3","pages":"100096"},"PeriodicalIF":6.4,"publicationDate":"2025-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12709963/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145782429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Volumetric Deep Learning-Based Precision Phenotyping of Gene-Edited Tomato for Vertical Farming
Pub Date: 2025-08-14 | eCollection Date: 2025-09-01 | DOI: 10.1016/j.plaphe.2025.100095
Yu-Jin Jeon, Seungpyo Hong, Taek Sung Lee, Soo Hyun Park, Giha Song, Myeong-Gyun Seo, Jiwoo Lee, Yoonseo Lim, Jeong-Tak An, Sehee Lee, Ho-Young Jeong, Soon Ju Park, Chanhui Lee, Dae-Hyun Jung, Choon-Tak Kwon
Global climate change and urbanization pose challenges to sustainable food production and resource management in agriculture. Vertical farming allows for high-density cultivation on limited land but requires precise control of crop height. Tomato, a globally significant vegetable crop, urgently requires mutant varieties that suppress indeterminate growth for effective cultivation in vertical farming systems. In this study, we utilized the CRISPR-Cas9 system to develop a new tomato cultivar optimized for vertical farming by editing the Gibberellin 20-oxidase (SlGA20ox) genes, which are well known for their roles in the "Green Revolution". Additionally, we proposed a volumetric model to identify mutants effectively through non-destructive analysis of chlorophyll fluorescence. The proposed model achieved over 84 % classification accuracy in distinguishing triple-determinate and slga20ox gene-edited plants, outperforming traditional machine learning methods and 1D-CNN approaches. Unlike previous studies that relied primarily on manual feature extraction from chlorophyll fluorescence data, this research introduced a deep learning framework capable of automating feature extraction in three dimensions while learning the temporal characteristics of chlorophyll fluorescence imaging data. The study demonstrated the potential to classify tomato plants customized for vertical farming, leveraging advanced phenotypic analysis methods. Our approach explores new analytical methods for chlorophyll fluorescence imaging data within AI-based phenotyping and can be extended to other crops and traits, accelerating breeding programs and enhancing the efficiency of genetic resource management.
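Chlorophyll-fluorescence sequences feed the volumetric model as 3D input. As a hedged sketch of standard preprocessing, the code below computes per-pixel Fv/Fm from dark-adapted minimal (F0) and maximal (Fm) fluorescence frames and stacks a time series into a (T, H, W) volume; the paper's acquisition protocol and exact input layout are not specified here.

```python
# Conventional fluorometry preprocessing; frame names and stacking are assumptions.
import numpy as np

def fv_fm(f0: np.ndarray, fm: np.ndarray) -> np.ndarray:
    """Maximum PSII quantum yield per pixel: Fv/Fm = (Fm - F0) / Fm."""
    return (fm - f0) / np.maximum(fm, 1e-6)  # guard against division by zero

def to_volume(frames: list) -> np.ndarray:
    """Stack a fluorescence time series of (H, W) frames into a (T, H, W) float volume,
    the natural input shape for a 3D (volumetric) network."""
    return np.stack(frames, axis=0).astype(np.float32)
```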
{"title":"Volumetric Deep Learning-Based Precision Phenotyping of Gene-Edited Tomato for Vertical Farming.","authors":"Yu-Jin Jeon, Seungpyo Hong, Taek Sung Lee, Soo Hyun Park, Giha Song, Myeong-Gyun Seo, Jiwoo Lee, Yoonseo Lim, Jeong-Tak An, Sehee Lee, Ho-Young Jeong, Soon Ju Park, Chanhui Lee, Dae-Hyun Jung, Choon-Tak Kwon","doi":"10.1016/j.plaphe.2025.100095","DOIUrl":"10.1016/j.plaphe.2025.100095","url":null,"abstract":"<p><p>Global climate change and urbanization have posed challenges to sustainable food production and resource management in agriculture. Vertical farming, in particular, allows for high-density cultivation on limited land but requires precise control of crop height to suit vertical farming systems. Tomato, a globally significant vegetable crop, urgently requires mutant varieties that suppress indeterminate growth for effective cultivation in vertical farming systems. In this study, we utilized the CRISPR-Cas9 system to develop a new tomato cultivar optimized for vertical farming by editing the <i>Gibberellin 20-oxidase</i> (<i>SlGA20ox</i>) genes, which are well known for their roles in the \"Green Revolution\". Additionally, we proposed a volumetric model to effectively identify mutants through non-destructive analysis of chlorophyll fluorescence. The proposed model achieved over 84 % classification accuracy in distinguishing triple-determinate and <i>slga20ox</i> gene-edited plants, outperforming traditional machine learning methods and 1D-CNN approaches. Unlike previous studies that primarily relied on manual feature extraction from chlorophyll fluorescence data, this research introduced a deep learning framework capable of automating feature extraction in three dimensions while learning the temporal characteristics of chlorophyll fluorescence imaging data. The study demonstrated the potential to classify tomato plants customized for vertical farming, leveraging advanced phenotypic analysis methods. Our approach explores new analytical methods for chlorophyll fluorescence imaging data within AI-based phenotyping and can be extended to other crops and traits, accelerating breeding programs and enhancing the efficiency of genetic resource management.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 3","pages":"100095"},"PeriodicalIF":6.4,"publicationDate":"2025-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12710025/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145782451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}