Deep convolutional neural network for damaged vegetation segmentation from RGB images based on virtual NIR-channel estimation

Artificial Intelligence in Agriculture · IF 8.2 · Q1 (Agriculture, Multidisciplinary) · Pub Date: 2022-01-01 · DOI: 10.1016/j.aiia.2022.09.004
Artzai Picon , Arantza Bereciartua-Perez , Itziar Eguskiza , Javier Romero-Rodriguez , Carlos Javier Jimenez-Ruiz , Till Eggers , Christian Klukas , Ramon Navarra-Mestre
Citations: 1

Abstract


Performing accurate and automated semantic segmentation of vegetation is a first algorithmic step towards more complex models that can extract accurate biological information on crop health, weed presence and phenological state, among others. Traditionally, models based on normalized difference vegetation index (NDVI), near infrared channel (NIR) or RGB have been a good indicator of vegetation presence. However, these methods are not suitable for accurately segmenting vegetation showing damage, which precludes their use for downstream phenotyping algorithms. In this paper, we propose a comprehensive method for robust vegetation segmentation in RGB images that can cope with damaged vegetation. The method consists of a first regression convolutional neural network to estimate a virtual NIR channel from an RGB image. Second, we compute two newly proposed vegetation indices from this estimated virtual NIR: the infrared-dark channel subtraction (IDCS) and infrared-dark channel ratio (IDCR) indices. Finally, both the RGB image and the estimated indices are fed into a semantic segmentation deep convolutional neural network to train a model to segment vegetation regardless of damage or condition. The model was tested on 84 plots containing thirteen vegetation species showing different degrees of damage and acquired over 28 days. The results show that the best segmentation is obtained when the input image is augmented with the proposed virtual NIR channel (F1=0.94) and with the proposed IDCR and IDCS vegetation indices (F1=0.95) derived from the estimated NIR channel, while the use of only the image or RGB indices lead to inferior performance (RGB(F1=0.90) NIR(F1=0.82) or NDVI(F1=0.89) channel). The proposed method provides an end-to-end land cover map segmentation method directly from simple RGB images and has been successfully validated in real field conditions.
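The pipeline described above stacks the RGB image with the estimated virtual NIR channel and NIR-derived indices before feeding a segmentation network. A minimal sketch of that input-building step is shown below; the `estimate_nir` callable stands in for the paper's regression CNN (hypothetical here), and the index shown is the standard NDVI, since the exact IDCS/IDCR formulas are not given in this abstract.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Standard normalized difference vegetation index: (NIR - R) / (NIR + R)."""
    return (nir - red) / (nir + red + eps)

def build_segmentation_input(rgb: np.ndarray, estimate_nir) -> np.ndarray:
    """Stack RGB with a virtual NIR channel and an NIR-derived index,
    mirroring the idea of augmenting the segmentation network's input.
    `estimate_nir` is a placeholder for a trained regression CNN."""
    nir = estimate_nir(rgb)                # H x W, values assumed in [0, 1]
    idx = ndvi(nir, rgb[..., 0])           # red assumed to be channel 0
    return np.dstack([rgb, nir[..., None], idx[..., None]])

# Toy usage with a dummy NIR estimator (a real one would be a trained model):
rgb = np.random.rand(4, 4, 3)
stacked = build_segmentation_input(rgb, lambda img: img.mean(axis=-1))
print(stacked.shape)  # (4, 4, 5): RGB + virtual NIR + index
```

The five-channel tensor is then what a semantic segmentation network would consume in place of plain RGB; the paper's reported gains (F1 0.90 → 0.95) come from exactly this kind of input augmentation.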

Source journal: Artificial Intelligence in Agriculture — Engineering (miscellaneous)
CiteScore: 21.60 · Self-citation rate: 0.00% · Articles per year: 18 · Review time: 12 weeks