INVITATION: A Framework for Enhancing UAV Image Semantic Segmentation Accuracy Through Depth Information Fusion

Xiaodong Zhang, Wenlin Zhou, Guanzhou Chen, Jiaqi Wang, Qingyuan Yang, Xiaoliang Tan, Tong Wang, Yifei Chen
IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5. Published 2025-01-30. DOI: 10.1109/LGRS.2025.3534994. Full text: https://ieeexplore.ieee.org/document/10858079/

Abstract

With the increasing use of uncrewed aerial vehicles (UAVs), improving the accuracy of semantic segmentation is becoming critical. Depth information preserves geometric structure, serving as a valuable supplement to color-rich UAV imagery. Inspired by this, we propose a novel framework named INVITATION, which takes only original UAV imagery as input, yet is capable of deriving complementary depth information and fusing it into RGB semantic segmentation models effectively, thereby enhancing UAV semantic segmentation accuracy. Concretely, the framework supports two distinct depth generation approaches: high-precision multiview stereo (MVS) depth reconstruction from multiple views or video sequences via structure from motion (SfM), and monocular depth estimation from individual images. Empirical evaluations on the UAVid dataset show that the mIoU of INVITATION improved from 66.02% to 70.57% when using precise depth maps reconstructed via MVS, and reached 69.69% when using depth predictions from pretrained models, supporting the effectiveness of extracting and fusing depth information from original imagery for UAV semantic segmentation. This study explores a novel approach to acquiring UAV multimodal information at low data cost, highlights the advantages of incorporating depth information into UAV semantic analysis, and paves the way for further studies on the integration of multimodal UAV information. Our code is available at https://github.com/CVEO/INVITATION.
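The pipeline described in the abstract derives a depth map from the RGB input (via MVS reconstruction or a monocular depth network) and fuses it into a segmentation model. The sketch below is not the authors' implementation; it illustrates only the simplest fusion strategy, early fusion, where a min-max-normalized depth channel is stacked with the RGB channels before entering the network. The function name and normalization choice are assumptions for illustration.

```python
import numpy as np

def fuse_rgb_depth(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Early fusion: append a normalized depth channel to an RGB image.

    rgb   : (H, W, 3) uint8 image
    depth : (H, W) array (e.g., metric depth from MVS or a monocular
            depth network)
    returns a (H, W, 4) float32 array suitable as 4-channel network input.
    """
    rgb_f = rgb.astype(np.float32) / 255.0                 # scale RGB to [0, 1]
    d = depth.astype(np.float32)
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)         # min-max normalize depth
    return np.concatenate([rgb_f, d[..., None]], axis=-1)  # stack depth as 4th channel

# Example: a dummy 4x4 frame with a random depth map
rgb = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
depth = np.random.rand(4, 4)
fused = fuse_rgb_depth(rgb, depth)
print(fused.shape)  # (4, 4, 4)
```

A segmentation backbone would then accept this fused tensor by widening its first convolution to four input channels; intermediate- and late-fusion variants instead merge RGB and depth features inside the network.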