
The Plant Phenome Journal: Latest Publications

Data driven discovery and quantification of hyperspectral leaf reflectance phenotypes across a maize diversity panel
Pub Date : 2024-06-06 DOI: 10.1002/ppj2.20106
Michael C. Tross, Marcin W. Grzybowski, T. Jubery, Ryleigh J. Grove, Aime Nishimwe, J. V. Torres-Rodríguez, Guangchao Sun, B. Ganapathysubramanian, Yufeng Ge, James C. Schnable
Estimates of plant traits derived from hyperspectral reflectance data have the potential to efficiently substitute for traits that are time- or labor-intensive to score manually. Typical workflows for estimating plant traits from hyperspectral reflectance data employ supervised classification models that can require substantial ground truth datasets for training. We explore the potential of an unsupervised approach, autoencoders, to extract meaningful traits from plant hyperspectral reflectance data using measurements of the reflectance of 2151 individual wavelengths of light from the leaves of maize (Zea mays) plants harvested from 1658 field plots in a replicated field trial. A subset of autoencoder-derived variables exhibited significant repeatability, indicating that a substantial proportion of the total variance in these variables was explained by differences between maize genotypes, while other autoencoder variables appear to capture variation resulting from changes in leaf reflectance between different batches of data collection. Several of the repeatable latent variables were significantly correlated with other traits scored from the same maize field experiment, including one autoencoder-derived latent variable (LV8) that predicted plant chlorophyll content modestly better than a supervised model trained on the same data. In at least one case, genome-wide association study hits for variation in autoencoder-derived variables were proximal to genes with known or plausible links to leaf phenotypes expected to alter hyperspectral reflectance. In aggregate, these results suggest that an unsupervised, autoencoder-based approach can identify meaningful and genetically controlled variation in high-dimensional, high-throughput phenotyping data and link identified variables back to known plant traits of interest.
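The abstract describes the unsupervised pipeline only at a high level. As a rough illustration, the sketch below trains a dense autoencoder over 2151-band reflectance spectra in PyTorch; the layer widths, latent dimension, optimizer settings, and synthetic stand-in data are all assumptions for illustration, not the authors' configuration.

```python
# Minimal autoencoder sketch for hyperspectral leaf reflectance.
# Assumed (not from the paper): layer widths, a 16-dimensional latent
# space, MSE reconstruction loss, and random stand-in spectra.
import torch
import torch.nn as nn

N_BANDS = 2151   # one reflectance value per measured wavelength
N_LATENT = 16    # hypothetical number of latent variables

class SpectraAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(N_BANDS, 256), nn.ReLU(),
            nn.Linear(256, N_LATENT),
        )
        self.decoder = nn.Sequential(
            nn.Linear(N_LATENT, 256), nn.ReLU(),
            nn.Linear(256, N_BANDS),
        )

    def forward(self, x):
        z = self.encoder(x)           # latent traits (cf. "LV8")
        return self.decoder(z), z     # reconstruction and latents

model = SpectraAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
spectra = torch.rand(1658, N_BANDS)   # stand-in for 1658 plot spectra

for _ in range(100):                  # full-batch training, for brevity
    recon, _z = model(spectra)
    loss = loss_fn(recon, spectra)
    opt.zero_grad()
    loss.backward()
    opt.step()

latents = model.encoder(spectra).detach()  # per-plot latent variables
```

Each latent column could then be screened for repeatability across genotype replicates and carried into GWAS, as the paper does with its latent variables.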
Citations: 0
Estimating Fusarium head blight severity in winter wheat using deep learning and a spectral index
Pub Date : 2024-05-22 DOI: 10.1002/ppj2.20103
Riley McConachie, Connor Belot, Mitra Serajazari, Helen Booker, John Sulik
Fusarium head blight (FHB) of wheat (Triticum aestivum L.), caused by the fungal pathogen Fusarium graminearum (Fg), reduces grain yield and quality due to the production of the mycotoxin deoxynivalenol. Manual rating for incidence (percent of infected wheat heads/spikes) and severity (percent of spikelets infected) to estimate FHB resistance is time-consuming and subject to human error. This study uses a deep learning model, combined with a spectral index, to provide rapid phenotyping of FHB severity. An object detection model was used to localize wheat heads within bounding boxes, and these boxes were used to prompt Meta's Segment Anything Model to segment the wheat heads. Using 2576 images of wheat heads point-inoculated with Fg in a controlled environment, a spectral index was developed from the red and green bands to differentiate healthy from infected tissue and estimate disease severity. Stratified random sampling was applied to pixels within the segmentation mask, and the model classified pixels as healthy or infected with an accuracy of 87.8%. Linear regression determined the relationship between the index and visual severity scores: severity estimated by the index predicted visual scores well (R2 = 0.83, p < 2e-16). This workflow was also applied to plot-sized images of infected wheat heads from an outside dataset with varying cultivars and lighting to assess model transferability; it correctly classified pixels as healthy or infected with a prediction accuracy of 85.8%. These methods may provide rapid estimation of FHB severity to improve selection efficiency for resistance or to estimate disease pressure for effective management.
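The abstract gives only the ingredients of the index step (red and green bands, pixel classification, regression against visual scores). The sketch below shows one way those pieces could fit together; the normalized-difference formula, the threshold, and the synthetic data are assumptions, not the published method.

```python
# Hypothetical red/green index for scoring FHB severity on a segmented
# wheat head, plus a regression of index-based vs. visual severity.
import numpy as np
from scipy import stats

def severity_from_mask(rgb, mask, threshold=0.0):
    """Fraction of masked pixels classified as infected (0-1).

    rgb: (H, W, 3) float image; mask: (H, W) bool wheat-head mask.
    Bleached, infected spikelets lose chlorophyll, so red rises
    relative to green; this normalized difference is an assumed form
    of such an index, not the paper's published formula.
    """
    red = rgb[..., 0][mask].astype(float)
    green = rgb[..., 1][mask].astype(float)
    index = (red - green) / (red + green + 1e-9)
    return float((index > threshold).mean())

# Relate index-based severity to visual scores (synthetic stand-ins).
rng = np.random.default_rng(0)
est = rng.uniform(0.0, 1.0, 50)               # index severities
vis = 0.9 * est + rng.normal(0.0, 0.05, 50)   # visual scores
fit = stats.linregress(est, vis)
print(f"R^2 = {fit.rvalue**2:.2f}, p = {fit.pvalue:.2g}")
```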
Citations: 0
Zero-shot insect detection via weak language supervision
Pub Date : 2024-05-21 DOI: 10.1002/ppj2.20107
Ben Feuer, Ameya Joshi, Minsu Cho, Kewal Jani, Shivani Chiranjeevi, Ziwei Deng, Aditya Balu, Ashutosh Kumar Singh, S. Sarkar, Nirav C. Merchant, Arti Singh, B. Ganapathysubramanian, C. Hegde
Cheap and ubiquitous sensing has made collecting large agricultural datasets relatively straightforward. These large datasets (for instance, those curated by citizen science platforms like iNaturalist) can pave the way for developing powerful artificial intelligence (AI) models for detection and counting. However, traditional supervised learning methods require labeled data, and manual annotation of these raw datasets with useful labels (such as bounding boxes or segmentation masks) can be extremely laborious, expensive, and error-prone. In this paper, we demonstrate the power of zero-shot computer vision methods, a new family of approaches that require (almost) no manual supervision, for plant phenomics applications. Focusing on insect detection as the primary use case, we show that our models enable highly accurate detection of insects in a variety of challenging imaging environments. Our technical contributions are twofold: (a) We curate the Insecta rank class of iNaturalist to form a new benchmark dataset of approximately 6 million images consisting of 2526 agriculturally and ecologically important species, including pests and beneficial insects. (b) Using a vision-language object detection method coupled with weak language supervision, we are able to automatically annotate images in this dataset with bounding box information localizing the insect within each image. Our method succeeds in detecting diverse insect species present in a wide variety of backgrounds, producing high-quality bounding boxes in a zero-shot manner with no additional training cost. This open dataset can serve as a use-inspired benchmark for the AI community. We demonstrate that our method can also be used for other applications in plant phenomics, such as fruit detection in images of strawberry and apple trees. Overall, our framework highlights the promise of zero-shot approaches to make high-throughput plant phenotyping more affordable.
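The abstract does not name the detector it builds on. As an illustration of the text-prompted, zero-shot detection it describes, here is a sketch using OWL-ViT, a publicly available vision-language detector in Hugging Face `transformers`; the checkpoint, prompt, threshold, and image path are assumptions.

```python
# Zero-shot insect detection sketch with a public vision-language
# detector (OWL-ViT). Illustrative only; not the paper's exact model.
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

ckpt = "google/owlvit-base-patch32"
processor = OwlViTProcessor.from_pretrained(ckpt)
model = OwlViTForObjectDetection.from_pretrained(ckpt)

image = Image.open("field_photo.jpg").convert("RGB")  # any field image
prompts = [["a photo of an insect"]]   # the weak language supervision

inputs = processor(text=prompts, images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw outputs to scored boxes in image coordinates.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(
    outputs, threshold=0.1, target_sizes=target_sizes
)[0]

for box, score in zip(detections["boxes"], detections["scores"]):
    print([round(v, 1) for v in box.tolist()], round(float(score), 3))
```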
Citations: 4
Erratum to: Estimation of the nutritive value of grasslands with the Yara N-sensor field spectrometer
Pub Date : 2024-03-16 DOI: 10.1002/ppj2.20091
{"title":"Erratum to: Estimation of the nutritive value of grasslands with the Yara N‐sensor field spectrometer","authors":"","doi":"10.1002/ppj2.20091","DOIUrl":"https://doi.org/10.1002/ppj2.20091","url":null,"abstract":"","PeriodicalId":504448,"journal":{"name":"The Plant Phenome Journal","volume":"53 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140235950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Height Pole: Measuring plot height using a single-point LiDAR sensor
Pub Date : 2024-03-08 DOI: 10.1002/ppj2.20097
Malcolm J. Morrison, A. Gahagan, T. Hotte, Hannah E. Morrison, Matthew Kenny, A. Saumure, Marc B. Lefevbre
Plant canopy height is an essential trait for phenomics and plant breeding. Despite its importance, height is still largely measured by manual means with a ruler and notepad. Here, we present the Height Pole, a novel single-point LiDAR (SPL)-based instrument to measure and record plant and canopy height in the field quickly, reliably, and accurately. An SPL was mounted on top of a pole and aimed downward at an adjustable paddle positioned at the desired height. A custom app, written for Android OS, saved the plant height data from the SPL to a tablet. The Height Pole was tested against a ruler in the lab, in a field trial setting, and by multiple operators. Indoor and outdoor testing found no significant differences between ruler and Height Pole measurements. A test with five operators revealed that measuring, recording, transcribing, and digitizing were on average 20 s per plot slower with a ruler than with the Height Pole. The Height Pole required only one operator to measure and record data, reduced operator fatigue, and, by writing the data directly to a .CSV file, eliminated transcription errors. These improvements make it easier to collect crop height data on large experiments rapidly and accurately with low input costs.
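From this description, the height computation itself is simple geometry: a downward-facing SPL at a known mounting height ranges to the paddle resting on the canopy, and plot height is the difference. A minimal sketch of that logic and the .CSV logging follows; the mounting height, reading, and field names are illustrative, since the actual Android app is not shown in the abstract.

```python
# Height Pole logic sketch: canopy height = sensor mounting height
# minus the LiDAR range to the paddle. Values below are illustrative.
import csv
from datetime import datetime

SENSOR_HEIGHT_CM = 250.0  # assumed height of the SPL above the ground

def plot_height(distance_to_paddle_cm: float) -> float:
    """Canopy height from a single downward SPL range reading."""
    return SENSOR_HEIGHT_CM - distance_to_paddle_cm

with open("heights.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "plot_id", "height_cm"])
    # One simulated reading; the app would loop over live SPL ranges.
    writer.writerow([datetime.now().isoformat(), "plot_001",
                     plot_height(161.5)])   # -> 88.5 cm
```

Writing each reading straight to the .CSV is what removes the transcription step that the ruler workflow requires.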
Citations: 0
Adoption of unoccupied aerial systems in agricultural research
Pub Date : 2024-03-08 DOI: 10.1002/ppj2.20098
Jennifer Lachowiec, Max J. Feldman, Filipe Inacio Matias, D. LeBauer, Alexander Gregory
A comprehensive survey and subject-expert interviews conducted among agricultural researchers investigated perceived value and barriers to the adoption of unoccupied aerial systems (UASs) in agricultural research. These systems are often referred to colloquially as drones and are composed of unoccupied/uncrewed/unmanned vehicles and their integrated sensors. This study of UASs involved 154 respondents from 21 countries representing various agricultural sectors. The survey identified three key applications considered most promising for UASs in agriculture: precision agriculture, crop phenotyping/plant breeding, and crop modeling. Over 80% of respondents rated UASs for phenotyping as valuable, with 47.6% considering them very valuable. Among the participants, 41% were already using UAS technology in their research, while 49% expressed interest in future adoption. Current users highly valued UASs for phenotyping, with 63.9% considering them very valuable, compared to 39.4% of potential future users. The study also explored barriers to UAS adoption. The most commonly reported barriers were the "High cost of instruments/devices or software" (46.0%) and the "Lack of knowledge or trained personnel to analyze data" (40.9%). These barriers persisted as top concerns for both current and potential future users. Respondents expressed a desire for detailed step-by-step protocols for drone data processing pipelines (34.7%) and in-person training for personnel (16.5%) as valuable resources for UAS adoption. The research sheds light on the prevailing perceptions and challenges associated with UAS usage in agricultural research, emphasizing the potential of UASs in specific applications and identifying crucial barriers to address for wider adoption in the agricultural sector.
Citations: 0
UAV image acquisition and processing for high-throughput phenotyping in agricultural research and breeding programs
Pub Date : 2024-02-19 DOI: 10.1002/ppj2.20096
Ocident Bongomin, Jimmy Lamo, Joshua Mugeziaubwa Guina, Collins Okello, Gilbert Gilibrays Ocen, Morish Obura, Simon Alibu, Cynthia Awuor Owino, A. Akwero, Samson Ojok
We are in a race against time to combat climate change and increase food production by 70% to feed the ever‐growing world population, which is expected to double by 2050. Agricultural research plays a vital role in improving crops and livestock through breeding programs and good agricultural practices, enabling sustainable agriculture and food systems. While advanced molecular breeding technologies have been widely adopted, phenotyping as an essential aspect of agricultural research and breeding programs has seen little development in most African institutions and remains a traditional method. However, the concept of high‐throughput phenotyping (HTP) has been gaining momentum, particularly in the context of unmanned aerial vehicle (UAV)‐based phenotyping. Although research into UAV‐based phenotyping is still limited, this paper aimed to provide a comprehensive overview and understanding of the use of UAV platforms and image analytics for HTP in agricultural research and to identify the key challenges and opportunities in this area. The paper discusses field phenotyping concepts, UAV classification and specifications, use cases of UAV‐based phenotyping, UAV imaging systems for phenotyping, and image processing and analytics methods. However, more research is required to optimize UAVs’ performance for image data acquisition, as limited studies have focused on the effect of UAVs’ operational parameters on data acquisition.
Citations: 0
Allometry and volumes in a nutshell: Analyzing walnut morphology using three-dimensional X-ray computed tomography
Pub Date : 2024-02-19 DOI: 10.1002/ppj2.20095
Erik J. Amézquita, Michelle Y. Quigley, Patrick J. Brown, Elizabeth Munch, D. Chitwood
Persian walnuts (Juglans regia L.) are the second most produced and consumed tree nut, with over 2.6 million metric tons produced in the 2022-2023 harvest cycle alone. The United States is the second largest producer, accounting for 25% of the total global supply. Nonetheless, producers face an ever-growing demand in a more uncertain climate landscape, which requires effective and efficient walnut selection and breeding of new cultivars with increased kernel content and easy-to-open shells. Past and current efforts select for these traits using hand-held calipers and eye-based evaluations. Yet there is plenty of morphology that meets the eye but goes unmeasured, such as the volume of inner air or the convexity of the kernel. Here, we study the shape of walnut fruits based on X-ray computed tomography three-dimensional reconstructions. We compute 49 different morphological phenotypes for 1264 individual nuts comprising 149 accessions. These phenotypes are complemented by traits of breeding interest such as ease of kernel removal and kernel-to-nut weight ratio. Through allometric relationships (the relative growth of one tissue with respect to another), we identify possible biophysical constraints at play during development. We explore multiple correlations between all morphological and commercial traits and identify which morphological traits can explain the most variability in commercial traits. We show that using only volume- and thickness-based traits, especially inner air content, we can successfully encode several of the commercial traits.
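The allometric analysis can be made concrete with a short worked sketch: if one tissue scales with another as y = a * x**b, then b is the slope of a line fit in log-log space, with b near 1 indicating isometric growth and deviations hinting at biophysical constraints. The volumes below are synthetic stand-ins, not the walnut CT measurements.

```python
# Allometric-exponent sketch: fit log(y) = log(a) + b * log(x).
# Synthetic shell/kernel volumes stand in for the CT-derived traits.
import numpy as np

rng = np.random.default_rng(1)
shell_vol = rng.uniform(5.0, 30.0, 200)                   # cm^3
kernel_vol = 0.4 * shell_vol**1.1 * rng.lognormal(0.0, 0.05, 200)

b, log_a = np.polyfit(np.log(shell_vol), np.log(kernel_vol), 1)
print(f"exponent b = {b:.2f}, scale a = {np.exp(log_a):.2f}")
```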
Citations: 0
Erratum to: Mixing things up! Identifying early diversity benefits and facilitating the development of improved variety mixtures with high throughput field phenotyping
Pub Date : 2024-01-30 DOI: 10.1002/ppj2.20093
Flavian Tschurr, Corina Oppliger, Samuel E. Wuest, N. Kirchgessner, Achim Walter
{"title":"Erratum to: Mixing things up! Identifying early diversity benefits and facilitating the development of improved variety mixtures with high throughput field phenotyping","authors":"Flavian Tschurr, Corina Oppliger, Samuel E. Wuest, N. Kirchgessner, Achim Walter","doi":"10.1002/ppj2.20093","DOIUrl":"https://doi.org/10.1002/ppj2.20093","url":null,"abstract":"","PeriodicalId":504448,"journal":{"name":"The Plant Phenome Journal","volume":"28 13","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140482375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Toward improved image-based root phenotyping: Handling temporal and cross-site domain shifts in crop root segmentation models
Pub Date : 2024-01-30 DOI: 10.1002/ppj2.20094
Travis Banet, Abraham George Smith, Rebecca K. McGrail, D. McNear, Hanna J. Poffenbarger
Crop root segmentation models developed through deep learning have increased the throughput of in situ crop phenotyping studies. However, models trained to identify roots in one image dataset may not accurately identify roots in another dataset, especially when the new dataset contains known differences, called domain shifts. The objective of this study was to quantify how model performance changes when models are used to segment image datasets that contain domain shifts, and to evaluate approaches for reducing the error associated with those shifts. We collected maize root images at two growth stages (V7 and R2) in a field experiment and manually segmented images to measure total root length (TRL). We developed five segmentation models and evaluated each model's ability to handle a temporal (growth-stage) domain shift. For the V7 growth stage, a growth-stage-specific model trained only on images captured at the V7 growth stage was best suited for measuring TRL. At the R2 growth stage, combining images from both growth stages into a single dataset to train a model resulted in the most accurate TRL measurements. We applied two of the field models to images from a greenhouse experiment to evaluate how model performance changed when exposed to a cross-site domain shift. Field models were less accurate than models trained only on the greenhouse images, even when crop growth stage was identical. Although models may perform well for one experiment, model error increases when they are applied to images from different experiments, even when crop species, growth stage, and soil type are similar.
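As a rough illustration of the TRL measurement that segmentation enables, the sketch below skeletonizes a binary root mask and scales the skeleton pixel count by the ground resolution; the toy mask, the pixel size, and the pixel-count approximation of length are assumptions rather than the paper's pipeline.

```python
# TRL sketch: skeletonize a root mask and convert pixels to length.
import numpy as np
from skimage.morphology import skeletonize

def total_root_length(mask: np.ndarray, mm_per_pixel: float) -> float:
    """Approximate TRL (mm) as skeleton pixel count * pixel size."""
    skeleton = skeletonize(mask.astype(bool))
    return float(skeleton.sum()) * mm_per_pixel

mask = np.zeros((100, 100), dtype=bool)
mask[10:90, 48:52] = True        # toy "root": a thick vertical strip
print(total_root_length(mask, mm_per_pixel=0.5))  # roughly 40 mm
```

With a fixed measurement step like this, the segmentation model becomes the main source of cross-domain error, which is what the study quantifies.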
Citations: 1