FO-Net: An advanced deep learning network for individual tree identification using UAV high-resolution images

IF 10.6 · JCR Q1 (Geography, Physical) · CAS Tier 1 (Earth Science) · ISPRS Journal of Photogrammetry and Remote Sensing · Pub Date: 2024-12-28 · DOI: 10.1016/j.isprsjprs.2024.12.020
Jian Zeng, Xin Shen, Kai Zhou, Lin Cao
Citations: 0

Abstract

The identification of individual trees can reveal the competitive and symbiotic relationships among trees within forest stands, which is fundamental to understanding biodiversity and forest ecosystems. Highly precise identification of individual trees can significantly improve the efficiency of forest resource inventories and is valuable for biomass measurement and forest carbon storage assessment. In previous studies that applied deep learning to individual tree identification, feature extraction usually struggled to adapt to variation in tree crown architecture, and the loss of feature information during multi-scale fusion was also a marked challenge for extracting trees from remote sensing images. Based on a one-stage deep learning network structure, this study improves and optimizes the three stages of deep learning methods (feature extraction, feature fusion, and feature identification) and constructs a novel feature-oriented individual tree identification network (FO-Net) suited to UAV high-resolution images. First, an adaptive feature extraction algorithm based on variable position drift convolution is proposed, which improves the ability to extract features of individual trees with various crown sizes and shapes in UAV images. Second, to enhance the network's ability to fuse multi-scale forest features, a feature fusion algorithm based on the "gather-and-distribute" mechanism is proposed for the feature pyramid network, which realizes lossless cross-layer transmission of feature-map information. Finally, in the individual tree identification stage, a unified self-attention identification head is introduced to enhance FO-Net's ability to perceive trees with small crown diameters.
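The abstract does not detail the unified self-attention head; as a rough illustration of the underlying mechanism, scaled dot-product self-attention over flattened feature-map positions can be sketched as below. The single-head form, shapes, and random projections are assumptions for illustration only, not FO-Net's actual design.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over feature-map positions.

    x           : (n, d) array of n flattened spatial positions, d channels
    w_q/w_k/w_v : (d, d) query/key/value projection matrices
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(x.shape[1])           # (n, n) pairwise affinities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # row-wise softmax
    return weights @ v                               # each position aggregates all others

rng = np.random.default_rng(0)
d = 8
x = rng.standard_normal((16, d))                     # e.g. a 4x4 feature map, flattened
out = self_attention(x, *(rng.standard_normal((d, d)) for _ in range(3)))
print(out.shape)  # (16, 8)
```

Because every position attends to every other, such a head can strengthen responses for small objects (here, small crown diameters) using global context, which plain convolutions with limited receptive fields cannot.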
FO-Net achieved the best performance in quantitative experiments on self-constructed datasets, with an mAP50, F1-score, Precision, and Recall of 90.7%, 0.85, 85.8%, and 82.8%, respectively, realizing relatively high accuracy for individual tree identification compared to traditional deep learning methods. The proposed feature extraction and fusion algorithms improved the accuracy of individual tree identification by 1.1% and 2.7%, respectively. Qualitative experiments based on Grad-CAM heat maps also demonstrate that FO-Net focuses more on the contours of individual trees in high-resolution images and reduces the influence of background factors during feature extraction and identification. The FO-Net deep learning network improves the accuracy of individual tree identification in UAV high-resolution images without significantly increasing the network's parameters, providing a reliable method to support various tasks in fine-scale precision forestry.
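For reference, the Precision, Recall, and F1-score quoted above follow the standard object-detection definitions over matched detections. A minimal sketch, using hypothetical true-positive/false-positive/false-negative counts (the counts below are illustrative, not the paper's data):

```python
def detection_metrics(tp, fp, fn):
    """Standard detection metrics from match counts.

    tp: predicted crowns matched to a reference tree (e.g. IoU >= 0.5)
    fp: predicted crowns with no matching reference tree
    fn: reference trees missed by the detector
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts only.
p, r, f1 = detection_metrics(tp=90, fp=15, fn=20)
print(f"Precision={p:.3f} Recall={r:.3f} F1={f1:.3f}")
```

mAP50 extends this by sweeping the detector's confidence threshold and averaging precision over recall levels at an IoU threshold of 0.5, so it summarizes the whole precision-recall curve rather than a single operating point.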
Source journal: ISPRS Journal of Photogrammetry and Remote Sensing (Engineering & Technology: Imaging Science & Photographic Technology)
CiteScore: 21.00
Self-citation rate: 6.30%
Articles published per year: 273
Review time: 40 days
Journal introduction: The ISPRS Journal of Photogrammetry and Remote Sensing (P&RS) serves as the official journal of the International Society for Photogrammetry and Remote Sensing (ISPRS). It acts as a platform for scientists and professionals worldwide who are involved in various disciplines that utilize photogrammetry, remote sensing, spatial information systems, computer vision, and related fields. The journal aims to facilitate communication and dissemination of advancements in these disciplines, while also acting as a comprehensive source of reference and archive. P&RS endeavors to publish high-quality, peer-reviewed research papers that are preferably original and have not been published before. These papers can cover scientific/research, technological development, or application/practical aspects. Additionally, the journal welcomes papers that are based on presentations from ISPRS meetings, as long as they are considered significant contributions to the aforementioned fields. In particular, P&RS encourages the submission of papers that are of broad scientific interest, showcase innovative applications (especially in emerging fields), have an interdisciplinary focus, discuss topics that have received limited attention in P&RS or related journals, or explore new directions in scientific or professional realms. It is preferred that theoretical papers include practical applications, while papers focusing on systems and applications should include a theoretical background.
Latest articles in this journal:
- GN-GCN: Grid neighborhood-based graph convolutional network for spatio-temporal knowledge graph reasoning
- An interactive fusion attention-guided network for ground surface hot spring fluids segmentation in dual-spectrum UAV images
- Near-surface air temperature estimation for areas with sparse observations based on transfer learning
- Contribution of ECOSTRESS thermal imagery to wetland mapping: Application to heathland ecosystems
- Generative networks for spatio-temporal gap filling of Sentinel-2 reflectances