{"title":"INVITATION: A Framework for Enhancing UAV Image Semantic Segmentation Accuracy Through Depth Information Fusion","authors":"Xiaodong Zhang;Wenlin Zhou;Guanzhou Chen;Jiaqi Wang;Qingyuan Yang;Xiaoliang Tan;Tong Wang;Yifei Chen","doi":"10.1109/LGRS.2025.3534994","DOIUrl":null,"url":null,"abstract":"With the increasing use of uncrewed aerial vehicles (UAVs), improving the accuracy of semantic segmentation is becoming critical. Depth information preserves geometric structure, serving as an invaluable supplement to color-rich UAV imagery. Inspired by this, we proposed a novel framework named INVITATION, which exclusively takes original UAV imagery as input, yet is capable of obtaining complemented depth information and fusing into RGB semantic segmentation models effectively, thereby enhancing UAV semantic segmentation accuracy. Concretely, this framework supports two distinct depth generation approaches: high-precision multiview stereo (MVS) depth reconstruction using multiple views or video sequences via structure from motion (SfM) and monocular depth estimation using individual images. Our empirical evaluations conducted on the UAVid dataset showed that mIoU metric of INVITATION used precise reconstructed depth maps via MVS improved from 66.02% to 70.57%, while used depth predictions from pretrained models reached 69.69%, which supports the effectiveness of extracting and fusing depth information from original imagery in enhancing UAV semantic segmentation. This study explores a novel approach to acquire UAV multimodal information at low data cost, highlights the advantages of incorporating depth information into UAV semantic analysis, and paves the way for further studies on the integration of multimodal UAV information. Our code is available at <uri>https://github.com/CVEO/INVITATION</uri>.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10858079/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
With the increasing use of uncrewed aerial vehicles (UAVs), improving the accuracy of semantic segmentation is becoming critical. Depth information preserves geometric structure, serving as an invaluable supplement to color-rich UAV imagery. Inspired by this, we propose a novel framework named INVITATION, which takes only original UAV imagery as input yet is able to recover complementary depth information and fuse it effectively into RGB semantic segmentation models, thereby enhancing UAV semantic segmentation accuracy. Concretely, the framework supports two distinct depth generation approaches: high-precision multiview stereo (MVS) depth reconstruction from multiple views or video sequences via structure from motion (SfM), and monocular depth estimation from individual images. Empirical evaluations on the UAVid dataset show that the mIoU of INVITATION improves from 66.02% to 70.57% when using precise depth maps reconstructed via MVS, and reaches 69.69% when using depth predictions from pretrained monocular models, which supports the effectiveness of extracting and fusing depth information from the original imagery for enhancing UAV semantic segmentation. This study explores a novel approach to acquiring UAV multimodal information at low data cost, highlights the advantages of incorporating depth information into UAV semantic analysis, and paves the way for further studies on the integration of multimodal UAV information. Our code is available at https://github.com/CVEO/INVITATION.
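To make the depth-fusion idea concrete, the following is a minimal, hypothetical PyTorch sketch, not the INVITATION architecture itself (the abstract does not specify the fusion mechanism; see the authors' repository for the actual implementation). The `EarlyFusionSegmenter` class, its layer sizes, and the early-fusion-by-concatenation strategy are all illustrative assumptions: a recovered depth map (from MVS reconstruction or a pretrained monocular estimator) is appended to the RGB input as a fourth channel before a toy encoder-decoder segmenter.

```python
# Hypothetical sketch only: NOT the INVITATION implementation
# (see https://github.com/CVEO/INVITATION for the authors' code).
import torch
import torch.nn as nn


class EarlyFusionSegmenter(nn.Module):
    """Toy RGB-D segmenter: depth is concatenated to RGB as a 4th input
    channel before a small encoder-decoder. The real framework's fusion
    strategy may differ; this only illustrates feeding recovered depth
    alongside color imagery."""

    def __init__(self, num_classes: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1),  # 4 = RGB + depth
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, num_classes, kernel_size=1),  # per-pixel logits
        )

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        # rgb: (B, 3, H, W); depth: (B, 1, H, W), e.g. reconstructed via
        # SfM/MVS or predicted by a pretrained monocular depth model.
        x = torch.cat([rgb, depth], dim=1)
        return self.decoder(self.encoder(x))


if __name__ == "__main__":
    model = EarlyFusionSegmenter(num_classes=8)  # UAVid has 8 classes
    rgb = torch.randn(1, 3, 256, 256)
    depth = torch.randn(1, 1, 256, 256)
    print(model(rgb, depth).shape)  # torch.Size([1, 8, 256, 256])
```

Early fusion (channel concatenation) is only one of several plausible choices here; feature-level or late fusion with a separate depth encoder is equally common in RGB-D segmentation and may be closer to what the paper actually does.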