Pub Date: 2024-10-18 | DOI: 10.1186/s13007-024-01270-8
Esther Rosales Sanchez, R Jordan Price, Federico Marangelli, Kirsty McLeary, Richard J Harrison, Anindya Kundu
Background: Plant breeding played a pivotal role in transforming the strawberry from a niche crop with a small geographical footprint into an economically important crop grown across the planet. However, even modern marker-assisted breeding takes considerable time, over multiple plant generations, to produce a plant with desirable traits. As a quicker alternative, plants with desirable traits can be raised through tissue culture using precise genetic manipulation. Overexpression of morphogenic regulators known for their role in meristem development, the transcription factors Growth-Regulating Factors (GRFs) and GRF-Interacting Factors (GIFs), has provided an efficient strategy for easier regeneration and transformation in multiple crops.
Results: We present here a comprehensive protocol for regeneration and transformation of the diploid strawberry Fragaria vesca Hawaii 4 under control conditions, compared with ectopic expression of GRF4-GIF1 chimeras from different plant species. We report that ectopic expression of Vitis vinifera VvGRF4-GIF1 provides significantly higher regeneration efficiency than wild-type plants during re-transformation. In contrast, deregulated expression of a miRNA-resistant version of VvGRF4-GIF1 or of Triticum aestivum (wheat) TaGRF4-GIF1 resulted in abnormalities. Transcriptomic analysis of the different chimeric GRF4-GIF1 lines indicates that differential expression of FvExpansin may be responsible for the observed pleiotropic effects. Similarly, cytokinin dehydrogenase/oxygenase and cytokinin-responsive response regulators also showed differential expression, indicating that the GRF4-GIF1 pathway plays an important role in controlling cytokinin homeostasis.
Conclusion: Our data indicate that ectopic expression of the Vitis vinifera VvGRF4-GIF1 chimera provides a significant advantage over wild-type plants during strawberry regeneration without producing the pleiotropic effects seen with the miRNA-resistant VvGRF4-GIF1 or TaGRF4-GIF1.
Overexpression of Vitis GRF4-GIF1 improves regeneration efficiency in diploid Fragaria vesca Hawaii 4. Plant Methods 20(1): 160. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11488064/pdf/
Pub Date: 2024-10-16 | DOI: 10.1186/s13007-024-01280-6
Hermawan Nugroho, Jing Xan Chew, Sivaraman Eswaran, Fei Siang Tay
This study explores the application of Artificial Intelligence (AI), specifically Convolutional Neural Networks (CNNs), for detecting rice plant diseases using ARM Cortex-M microprocessors. Given the significant role of rice as a staple food, particularly in Malaysia where the rice self-sufficiency ratio dropped from 65.2% in 2021 to 62.6% in 2022, there is a pressing need for advanced disease detection methods to enhance agricultural productivity and sustainability. The research utilizes two extensive datasets for model training and validation: the first dataset includes 5932 images across four rice disease classes, and the second comprises 10,407 images across ten classes. These datasets facilitate comprehensive disease detection analysis, leveraging MobileNetV2 and FD-MobileNet models optimized for the ARM Cortex-M4 microprocessor. The performance of these models is rigorously evaluated in terms of accuracy and computational efficiency. MobileNetV2, for instance, demonstrates a high accuracy rate of 97.5%, significantly outperforming FD-MobileNet, especially in detecting complex disease patterns such as tungro with a 93% accuracy rate. Despite FD-MobileNet's lower resource consumption, its accuracy is limited to 90% across varied testing conditions. Resource optimization strategies highlight that even slight adjustments, such as a 0.5% reduction in RAM usage and a 1.14% decrease in flash memory, can result in a notable 9% increase in validation accuracy. This underscores the critical balance between computational resource management and model performance, particularly in resource-constrained settings like those provided by microcontrollers. In summary, the deployment of CNNs on microcontrollers presents a viable solution for real-time, on-site plant disease detection, demonstrating potential improvements in detection accuracy and operational efficiency. 
This study advances the field of smart agriculture by integrating cutting-edge AI with practical agricultural needs, aiming to address the challenges of food security in vulnerable regions.
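MobileNet-style models fit on Cortex-M parts largely because they replace standard convolutions with depthwise-separable ones, shrinking the parameter (and flash) budget. A minimal sketch of the parameter arithmetic behind that design choice (the layer sizes are illustrative, not taken from the paper):

```python
def conv2d_params(k, c_in, c_out, bias=True):
    """Parameter count of a standard k x k convolution."""
    return k * k * c_in * c_out + (c_out if bias else 0)

def depthwise_separable_params(k, c_in, c_out, bias=True):
    """Depthwise k x k convolution followed by a 1x1 pointwise convolution."""
    depthwise = k * k * c_in + (c_in if bias else 0)
    pointwise = c_in * c_out + (c_out if bias else 0)
    return depthwise + pointwise

# Illustrative layer: 3x3 kernel, 32 input channels, 64 output channels
std_params = conv2d_params(3, 32, 64, bias=False)                 # 18432
sep_params = depthwise_separable_params(3, 32, 64, bias=False)    # 288 + 2048 = 2336
print(std_params, sep_params, round(std_params / sep_params, 1))  # roughly 7.9x fewer parameters
```

The same trade-off underlies the RAM/flash savings discussed above: fewer weights per layer directly reduce the memory footprint on the microcontroller.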
Resource-optimized CNNs for real-time rice disease detection with ARM Cortex-M microprocessors. Plant Methods 20(1): 159. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11481777/pdf/
Pub Date: 2024-10-15 | DOI: 10.1186/s13007-024-01282-4
Jérôme Bélanger, Tanya Rose Copley, Valerio Hoyos-Villegas, Jean-Benoit Charron, Louise O'Donoughue
Correction: A comprehensive review of in planta stable transformation strategies. Plant Methods 20(1): 158. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11476722/pdf/
Pub Date: 2024-10-08 | DOI: 10.1186/s13007-024-01284-2
You Zhang, Yiyi Tu, Yijia Chen, Jialu Fang, Fan'anni Chen, Lian Liu, Xiaoman Zhang, Yuchun Wang, Wuyun Lv
The fungal pathogen Didymella segeticola causes leaf spot and leaf blight on tea plant (Camellia sinensis), leading to production losses and affecting tea quality and flavor. Accurate detection and quantification of D. segeticola growth in tea plant leaves are crucial for diagnosing disease severity and evaluating host resistance. In this study, we monitored disease progression and D. segeticola development in tea plant leaves inoculated with a GFP-expressing strain. In addition, a DNA-based qRT-PCR assay was developed for more convenient and practical detection of D. segeticola growth in tea leaves. This method quantifies a D. segeticola-specific DNA sequence encoding a Cys2His2 zinc-finger protein (NCBI accession number: OR987684) relative to the tea plant Cs18S rDNA1 reference. Unlike ITS and TUB2 sequences, this specific DNA was amplified only in D. segeticola isolates, not in other tea plant pathogens. The assay is also applicable for detecting D. segeticola during interactions with various tea cultivars. Among the five cultivars tested, 'Zhongcha102' (ZC102) and 'Fuding-dabaicha' (FDDB) were more susceptible to D. segeticola than 'Longjing43' (LJ43), 'Zhongcha108' (ZC108), and 'Zhongcha302' (ZC302). Different D. segeticola isolates also exhibited varying levels of aggressiveness towards LJ43. In conclusion, the DNA-based qRT-PCR assay is a highly sensitive, convenient, and effective method for quantifying D. segeticola growth in tea plant. This technique can be used to diagnose the severity of tea leaf spot and blight or to evaluate tea plant resistance to this pathogen.
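The abstract quantifies fungal DNA relative to the host Cs18S rDNA1 reference but does not state the calculation. A common approach for this kind of relative quantification is the 2^(-ΔΔCt) method, sketched here with made-up Ct values (the function and numbers are illustrative, not the authors' implementation):

```python
def relative_quantity(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """2^-(ddCt): target gene normalized to a reference gene,
    expressed relative to a calibrator sample."""
    d_ct_sample = ct_target - ct_ref          # normalize sample to reference
    d_ct_cal = ct_target_cal - ct_ref_cal     # normalize calibrator to reference
    return 2 ** -(d_ct_sample - d_ct_cal)

# Hypothetical Ct values: heavily infected leaf vs. a lightly infected calibrator
rq = relative_quantity(ct_target=22.0, ct_ref=16.0,
                       ct_target_cal=26.0, ct_ref_cal=17.0)
print(rq)  # 2^-(6 - 9) = 2^3 = 8.0 -> 8-fold more fungal DNA than the calibrator
```

Assuming roughly 100% amplification efficiency for both primer pairs; efficiency-corrected models would replace the base 2 with the measured efficiency.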
Quantification of the fungal pathogen Didymella segeticola in Camellia sinensis using a DNA-based qRT-PCR assay. Plant Methods 20(1): 157. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11462658/pdf/
Pub Date: 2024-10-03 | DOI: 10.1186/s13007-024-01273-5
Marvin Krüger, Thomas Zemanek, Dominik Wuttke, Maximilian Dinkel, Albrecht Serfling, Elias Böckmann
Background: The automation of pest monitoring is highly important for enhancing integrated pest management in practice. In this context, advanced technologies are increasingly being explored. Hyperspectral imaging (HSI) is a technique that has been used frequently in recent years in the natural sciences, and successful detection of several fungal diseases and some pests has been reported. Various automated measures and image analysis methods offer great potential for enhancing monitoring in practice.
Results: In this study, the use of hyperspectral imaging over a wide spectrum from 400 to 2500 nm is investigated for noninvasive identification and distinction of healthy bell pepper plants from plants infested with Myzus persicae (Sulzer) or Frankliniella occidentalis (Pergande). Pest infestations were carried out in netted areas, and images of single plants and dissected leaves were used to train the decision algorithm. Additionally, a specially modified spraying robot was converted into an autonomous platform carrying the hyperspectral imaging system to take images under greenhouse conditions. The algorithm was developed with the XGBoost framework using gradient-boosted trees. Signals from specific wavelengths were found to be associated with the damage patterns of different insects. Under confined conditions, M. persicae and F. occidentalis infestations were distinguished from each other and from the uninfested control for single leaves. Differentiation was still possible when small whole plants were used. However, application under greenhouse conditions did not result in a good fit compared to the results of manual monitoring.
Conclusion: Hyperspectral images can be used to distinguish sucking pests on bell peppers on the basis of single leaves and intact potted bell pepper plants under controlled conditions. Wavelength reduction methods offer options for using multispectral cameras in high-grown vegetable greenhouses. Applying automated platforms similar to the one tested here is feasible in principle, but for successful pest detection under greenhouse conditions, the algorithms should be developed further with real-world conditions fully taken into account.
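The workflow above (gradient-boosted trees classifying per-sample spectra, then ranking wavelengths by importance) can be sketched on synthetic data. XGBoost itself may not be installed everywhere, so scikit-learn's GradientBoostingClassifier stands in for the same gradient-boosted-trees idea; all spectra, band indices, and class labels below are invented:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, bands = 200, 50  # 50 synthetic reflectance bands per sample

# "Healthy" spectra: flat noisy baseline; "infested": reflectance
# depressed in a narrow band window, mimicking a damage signature
healthy = rng.normal(0.5, 0.02, size=(n // 2, bands))
infested = rng.normal(0.5, 0.02, size=(n // 2, bands))
infested[:, 20:25] -= 0.05

X = np.vstack([healthy, infested])
y = np.array([0] * (n // 2) + [1] * (n // 2))
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")

# Feature importances concentrate in the perturbed window,
# mirroring the wavelength-selection idea in the abstract
top_band = int(np.argmax(clf.feature_importances_))
print("most informative band index:", top_band)
```

Real spectra are far messier than this toy separation; the point is only the pipeline shape, not the accuracy figure.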
Hyperspectral imaging for pest symptom detection in bell pepper. Plant Methods 20(1): 156. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11447932/pdf/
Pub Date: 2024-10-01 | DOI: 10.1186/s13007-024-01279-z
Grace Handy, Imogen Carter, A Rob Mackenzie, Adriane Esquivel-Muelbert, Abraham George Smith, Daniela Yaffar, Joanne Childs, Marie Arnaud
Background: The manual study of root dynamics using images requires huge investments of time and resources and is prone to previously poorly quantified annotator bias. Artificial intelligence (AI) image-processing tools have been successful in overcoming the limitations of manual annotation in homogeneous soils, but their efficiency and accuracy have yet to be widely tested on less homogeneous, non-agricultural soil profiles, e.g., those of forests, from which data on root dynamics are key to understanding the carbon cycle. Here, we quantify variance in root length measured by human annotators with varying experience levels. We evaluate the application of a convolutional neural network (CNN) model, trained with software accessible to researchers without a machine learning background, on a heterogeneous minirhizotron image dataset taken in a multispecies, mature, deciduous temperate forest.
Results: Less experienced annotators consistently identified more root length than experienced annotators, and root length annotation also varied among experienced annotators. The CNN root length results were neither precise nor accurate: annotation took ~10% of the time but significantly overestimated root length compared with expert manual annotation (p = 0.01). The CNN net root length change results were closer to the manual values (p = 0.08), but substantial variation remained.
Conclusions: Manual root length annotation is contingent on the individual annotator. The only accessible CNN model cannot yet produce root data of sufficient accuracy and precision for ecological applications when applied to a complex, heterogeneous forest image dataset. A continuing evaluation and development of accessible CNNs for natural ecosystems is required.
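The CNN-vs-expert comparison above is a paired design (the same images measured by two methods), which is commonly tested with a paired t-test. A sketch of that style of analysis with invented root-length values (scipy's `ttest_rel`; the numbers are not from the study):

```python
import numpy as np
from scipy import stats

# Hypothetical root length (mm) per minirhizotron image
manual = np.array([10.2, 8.7, 12.1, 9.5, 11.3, 7.9, 10.8, 9.1])
# CNN with a built-in systematic overestimate plus small per-image noise
cnn = manual * 1.3 + np.array([0.2, -0.1, 0.3, 0.0, -0.2, 0.1, 0.2, -0.1])

t_stat, p_value = stats.ttest_rel(cnn, manual)
print(f"mean overestimation: {np.mean(cnn - manual):.2f} mm, p = {p_value:.4f}")
# A small p-value flags a systematic bias, analogous to the study's p = 0.01
```

With real data one would also inspect per-image agreement (e.g., a Bland-Altman plot) rather than rely on the test alone.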
Variation in forest root image annotation by experts, novices, and AI. Plant Methods 20(1): 154. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11443924/pdf/
Pub Date: 2024-10-01 | DOI: 10.1186/s13007-024-01281-5
Hongfeng Yu, Yongqian Ding, Pei Zhang, Furui Zhang, Xianglin Dou, Zhengmeng Chen
Background: Crop phenotype extraction devices based on multiband narrowband spectral images can effectively detect the physiological and biochemical parameters of crops, which plays a positive role in guiding the development of precision agriculture. Although narrowband spectral image canopy extraction is a fundamental algorithm for such devices, developing a real-time, embedded canopy extraction method remains challenging owing to the small difference between canopy and background in narrowband spectral images.
Methods: This study identified and validated the skewed distribution of leaf color gradation in narrowband spectral images. By introducing kurtosis and skewness feature parameters, a canopy extraction method based on superpixel skewed color gradation distribution was proposed for narrowband spectral images. In addition, different combinations of parameters were used as input to construct two classifier models, and the contribution of the skewed-distribution feature parameters to the proposed canopy extraction method was evaluated to confirm the effectiveness of introducing leaf color skewed-distribution features.
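The skewness and kurtosis features described above can be computed per superpixel from its gray-level values. A minimal numpy sketch using the standard moment definitions (not necessarily the authors' exact implementation):

```python
import numpy as np

def skewness(x):
    """Third standardized moment of a superpixel's gray levels."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return ((x - m) ** 3).mean() / s ** 3

def kurtosis(x):
    """Fourth standardized moment (3.0 for a normal distribution)."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return ((x - m) ** 4).mean() / s ** 4

# A right-skewed toy "superpixel": mostly dark soil pixels with a few bright ones
soil_like = np.array([10, 12, 11, 13, 10, 11, 12, 10, 60, 70], dtype=float)
print(skewness(soil_like) > 0)  # long tail toward bright values
```

Feature vectors like `[skewness, kurtosis, mean, std]` per superpixel could then feed a classifier such as the Bayesian models the study compares.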
Results: Leaf color gradient skewness verification was conducted on 4200 superpixels of different sizes, and 4190 superpixels conformed to the skewed distribution. The intersection over union (IoU) between the soil background and canopy using the expanded leaf color skewed-distribution feature parameters was 90.41%, whereas that of the traditional Otsu segmentation algorithm was 77.95%; the proposed canopy extraction method thus performed significantly better than traditional threshold segmentation. Using the same training set, Y1 (without skewed parameters) and Y2 (with skewed parameters) Bayesian classifier models were constructed. Under the same test conditions, the average classification accuracies Acc_Y1 and Acc_Y2 were 72.02% and 91.76%, respectively, indicating that introducing leaf color gradient skewed parameters can significantly improve the accuracy of Bayesian classifiers on narrowband spectral images of canopy and soil background.
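The IoU figures above compare predicted canopy masks with ground truth. A compact definition on binary masks (the toy masks are illustrative, not the paper's data):

```python
import numpy as np

def iou(pred, truth):
    """Intersection over union of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

truth = np.zeros((4, 4), dtype=int)
truth[1:3, 1:3] = 1          # 4-pixel canopy patch
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 1:4] = 1           # prediction spills one column into the soil
print(iou(pred, truth))      # 4 / 6 ≈ 0.667
```

An IoU of 90.41% therefore means the predicted canopy overlaps the true canopy in all but a small fringe of pixels.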
Conclusions: The introduction of kurtosis and skewness as leaf color skewness feature parameters can expand the expression of leaf color information in narrowband spectral images. The narrowband spectral image canopy extraction method based on superpixel color skewness distribution features can effectively segment the canopy and soil background in narrowband spectral images, thereby providing a new solution for crop canopy phenotype feature extraction.
Study on canopy extraction method for narrowband spectral images based on superpixel color gradation skewness distribution features
Pub Date : 2024-10-01DOI: 10.1186/s13007-024-01281-5
Hongfeng Yu, Yongqian Ding, Pei Zhang, Furui Zhang, Xianglin Dou, Zhengmeng Chen
Background: Crop phenotype extraction devices based on multiband narrowband spectral images can effectively detect the physiological and biochemical parameters of crops, which plays a positive role in guiding the development of precision agriculture. Although the narrowband spectral image canopy extraction method is a fundamental algorithm for the development of crop phenotype extraction devices, developing a real-time, embedded canopy extraction method remains challenging owing to the small difference between canopy and background in narrowband spectral images.
Methods: This study identified and validated the skewed distribution of leaf color gradation in narrowband spectral images. By introducing kurtosis and skewness feature parameters, a canopy extraction method based on superpixel skewed color gradation distributions was proposed for narrowband spectral images. In addition, two classifier models were constructed from different combinations of input parameters, and the contribution of the skewed distribution feature parameters to the proposed canopy extraction method was evaluated to confirm the effectiveness of introducing leaf color skewed distribution features.
Results: Leaf color gradation skewness verification was conducted on 4200 superpixels of different sizes, and 4190 superpixels conformed to the skewed distribution. The intersection over union (IoU) between the soil background and the canopy achieved with the expanded leaf color skewed distribution feature parameters was 90.41%, whereas that of the traditional Otsu segmentation algorithm was 77.95%; the canopy extraction method used in this study therefore performed significantly better than the traditional threshold segmentation method. Using the same training set, Y1 (without skewed parameters) and Y2 (with skewed parameters) Bayesian classifier models were constructed. After evaluating the segmentation effect of introducing the skewed parameters, the average classification accuracies Acc_Y1 of the Y1 model and Acc_Y2 of the Y2 model were 72.02% and 91.76%, respectively, under the same test conditions. This indicates that introducing leaf color gradation skewed parameters can significantly improve the accuracy of Bayesian classifiers in separating canopy from soil background in narrowband spectral images.
Conclusions: The introduction of kurtosis and skewness as leaf color skewness feature parameters can expand the expression of leaf color information in narrowband spectral images. The narrowband spectral image canopy extraction method based on superpixel color skewness distribution features can effectively segment the canopy and soil background in narrowband spectral images, thereby providing a new solution for crop canopy phenotype feature extraction.
Plant Methods 20(1): 155 (2024). Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11446045/pdf/
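The skewness and kurtosis feature parameters and the IoU comparison described in this abstract can be sketched with their standard statistical definitions. This is a minimal illustration under general assumptions, not the authors' implementation; the function names `skewness_kurtosis` and `iou` are hypothetical:

```python
import numpy as np

def skewness_kurtosis(gray_values):
    """Sample skewness and excess kurtosis of a superpixel's
    gray-level (leaf color gradation) distribution.
    Assumes the values are not all identical (nonzero std)."""
    x = np.asarray(gray_values, dtype=float)
    z = (x - x.mean()) / x.std()
    skew = np.mean(z ** 3)
    kurt = np.mean(z ** 4) - 3.0  # excess kurtosis; 0 for a normal distribution
    return skew, kurt

def iou(mask_a, mask_b):
    """Intersection over union between two boolean segmentation masks."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()
```

A symmetric gray-level distribution yields a skewness near zero, so superpixels whose skewness departs strongly from zero carry the distributional signal the method exploits.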
Pub Date : 2024-09-30DOI: 10.1186/s13007-024-01264-6
Joseph Carter, Joshua Hoffman, Braxton Fjeldsted, Grant Ogilvie, Douglas D Cook
Maize is the most widely grown feed crop in the United States. Due to wind storms and other factors, 5% of maize falls over annually. The longitudinal shear modulus of maize stalk tissues is currently unreported and may have a significant influence on stalk failure. To better understand this phenomenon, maize stalk material properties need to be measured so that they can be used as material constants in computational models that provide detailed analysis of maize stalk failure. This study reports the longitudinal shear modulus of maize stalk tissue obtained through repeated torsion testing of dry, fully mature maize stalks. Measurements focused on the two tissues found in maize stalks: the hard outer rind and the soft inner pith. Uncertainty analysis and comparison of multiple methodologies indicated that all measurements are subject to low error and bias. The results of this study will allow researchers to better understand maize stalk failure modes through computational modeling and, in later studies, to develop strategies for reducing annual maize losses. This study also provides a methodology that could be used or adapted for measuring tissues of other plants, such as sorghum and sugarcane.
"Measurement of maize stalk shear moduli." Plant Methods 20(1): 152 (2024). Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11441149/pdf/
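For torsion testing of the kind described above, the standard mechanics-of-materials relation G = T·L/(J·φ) recovers a shear modulus from torque, gauge length, polar moment of area, and twist angle. The sketch below idealizes the specimen cross-section as a hollow circular tube (a rough stand-in for the rind); it is a generic illustration of the relation, not the authors' protocol, and the function names are hypothetical:

```python
import math

def polar_moment_hollow(d_outer, d_inner):
    """Polar second moment of area J (m^4) for a hollow circular
    cross-section with outer/inner diameters in metres."""
    return math.pi / 32.0 * (d_outer ** 4 - d_inner ** 4)

def shear_modulus_from_torsion(torque, length, twist_angle, J):
    """Longitudinal shear modulus G = T * L / (J * phi).
    torque in N*m, length in m, twist_angle in rad, J in m^4; G in Pa."""
    return torque * length / (J * twist_angle)
```

With torque-twist data from repeated tests, G would typically be taken from the slope of the linear region rather than a single point; the single-point form above is the simplest version of the calculation.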
Pub Date : 2024-09-30DOI: 10.1186/s13007-024-01277-1
Yang Zhou, Honghao Zhou, Yue Chen
To address the low efficiency and high cost of determining the phenotypic parameters of Cymbidium seedlings by manual approaches, this study proposed a fully automated measurement scheme for several phenotypic parameters based on point clouds. The key difficulty is designing a segmentation method for individual tillers that respects the species' specific morphology. After determining the branch points, a two-round segmentation scheme was designed: in the first round, the non-overlapping part of each tiller and the overlapping parts of the ramets are separated by edge point cloud-based segmentation; in the second round, the overlapping part is sliced along the horizontal direction according to the weight ratio of the tillers above, yielding the complete point cloud of every tiller. The core strength of the algorithm is that the segmentation follows the tiller growth direction well, and the extracted tiller skeleton points lie close to the actual growth direction, significantly improving the accuracy of the subsequently derived phenotypic parameters. Five phenotypic parameters, plant height, leaf number, leaf length, leaf width and leaf area, were calculated automatically. In experiments, the accuracy of the five parameters reached 98.6%, 100%, 92.2%, 89.1%, and 82.3%, respectively, meeting the needs of various phenotyping applications.
"An automated phenotyping method for Chinese Cymbidium seedlings based on 3D point cloud." Plant Methods 20(1): 151 (2024). Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11441005/pdf/
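Two of the operations the abstract describes, extracting plant height from a point cloud and slicing a cloud into horizontal layers along the vertical axis, can be sketched as below. This is a minimal sketch assuming points are stored as an N×3 array with z vertical; the slicing here is equal-thickness, whereas the paper weights slices by the tillers above, and the function names are hypothetical:

```python
import numpy as np

def plant_height(points):
    """Plant height as the vertical (z) extent of an N x 3 point cloud."""
    z = points[:, 2]
    return z.max() - z.min()

def horizontal_slices(points, n_slices):
    """Split a point cloud into equal-thickness horizontal slices along z,
    a simplified stand-in for partitioning the overlapping region."""
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_slices + 1)
    # np.digitize assigns each point to a bin; clip folds the top edge
    # into the last slice so no point is dropped
    idx = np.clip(np.digitize(z, edges) - 1, 0, n_slices - 1)
    return [points[idx == k] for k in range(n_slices)]
```

Leaf-level parameters (length, width, area) would require the per-tiller segmentation and skeletonization steps the paper develops, which are beyond this sketch.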
Pub Date : 2024-09-30DOI: 10.1186/s13007-024-01278-0
Yucheng Cai, Yan Li, Xuerui Qi, Jianqing Zhao, Li Jiang, Yongchao Tian, Yan Zhu, Weixing Cao, Xiaohu Zhang
Accurate monitoring of wheat phenological stages is essential for effective crop management and informed agricultural decision-making. Traditional methods often rely on labour-intensive field surveys, which are prone to subjective bias and limited temporal resolution. To address these challenges, this study explores the potential of near-surface cameras combined with an advanced deep-learning approach to derive wheat phenological stages from high-quality, real-time RGB image series. Three deep learning models based on three different spatiotemporal feature fusion methods, namely sequential fusion, synchronous fusion, and parallel fusion, were constructed and evaluated for deriving wheat phenological stages from these near-surface RGB image series. The impact of different image resolutions, capture perspectives, and model training strategies on model performance was also investigated. The results indicate that the model using the sequential fusion method is optimal, with an overall accuracy (OA) of 0.935, a mean absolute error (MAE) of 0.069, an F1-score (F1) of 0.936, and a kappa coefficient (Kappa) of 0.924 for wheat phenological stage detection. In addition, an enhanced image resolution of 512 × 512 pixels and a suitable capture perspective, specifically a sensor viewing angle of 40° to 60° from vertical, introduce more effective features for phenological stage detection, thereby enhancing the model's accuracy. Furthermore, a two-step fine-tuning strategy during model training enhances the model's robustness to random variations in perspective. This research introduces an innovative approach for real-time phenological stage detection and provides a solid foundation for precision agriculture. 
By accurately deriving critical phenological stages, the methodology developed in this study supports the optimization of crop management practices, which may result in improved resource efficiency and sustainability across diverse agricultural settings. The implications of this work extend beyond wheat, offering a scalable solution that can be adapted to monitor other crops, thereby contributing to more efficient and sustainable agricultural systems.
"A deep learning approach for deriving wheat phenology from near-surface RGB image series using spatiotemporal fusion." Plant Methods 20(1): 153 (2024). Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11443927/pdf/
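The evaluation metrics reported in this abstract (OA, F1, Kappa) all derive from a confusion matrix of predicted versus true phenological stages. The sketch below gives their standard definitions; it is a generic illustration, not the authors' evaluation code:

```python
import numpy as np

def overall_accuracy(cm):
    """Fraction of correct predictions from a square confusion matrix
    (rows = true class, columns = predicted class)."""
    return np.trace(cm) / cm.sum()

def cohens_kappa(cm):
    """Cohen's kappa: agreement corrected for chance agreement."""
    n = cm.sum()
    po = np.trace(cm) / n                          # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

def macro_f1(cm):
    """Macro-averaged F1 across classes; empty rows/columns are guarded."""
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)
    recall = tp / np.maximum(cm.sum(axis=1), 1)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return f1.mean()
```

Kappa discounts the agreement a classifier would achieve by chance given the class marginals, which is why it is reported alongside raw accuracy for multi-class stage detection.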