Colour guided ground-to-UAV fire segmentation

Rui Zhou, Tardi Tjahjadi
{"title":"Colour guided ground-to-UAV fire segmentation","authors":"Rui Zhou,&nbsp;Tardi Tjahjadi","doi":"10.1016/j.ophoto.2024.100076","DOIUrl":null,"url":null,"abstract":"<div><div>Leveraging ground-annotated data for scene analysis on unmanned aerial vehicles (UAVs) can lead to valuable real-world applications. However, existing unsupervised domain adaptive (UDA) methods primarily focus on domain confusion, which raises conflicts among training data if there is a huge domain shift caused by variations in observation perspectives or locations. To illustrate this problem, we present a ground-to-UAV fire segmentation method as a novel benchmark to verify typical UDA methods, and propose an effective framework, Colour-Mix, to boost the performance of the segmentation method equivalent to the fully supervised level. First, we identify domain-invariant fire features by deriving fire-discriminating components (u*VS) defined in colour spaces Lu*v*, YUV, and HSV. Notably, we devise criteria to combine components that are beneficial for integrating colour signals into deep-learning training, thus significantly improving the generalisation abilities of the framework without resorting to UDA techniques. Second, we perform class-specific mixing to eliminate irrelevant background content on the ground scenario and enrich annotated fire samples for the UAV imagery. Third, we propose to disentangle the feature encoding for different domains and use class-specific mixing as robust training signals for the target domain. The framework is validated on the drone-captured dataset, Flame, by using the combined ground-level source datasets, Street Fire and Corsica Wildfires. The code is available at <span><span>https://github.com/Rui-Zhou-2/Colour-Mix</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"14 ","pages":"Article 100076"},"PeriodicalIF":0.0000,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ISPRS Open Journal of Photogrammetry and Remote Sensing","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2667393224000206","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Leveraging ground-annotated data for scene analysis on unmanned aerial vehicles (UAVs) can lead to valuable real-world applications. However, existing unsupervised domain adaptation (UDA) methods primarily focus on domain confusion, which creates conflicts among the training data when there is a large domain shift caused by variations in observation perspective or location. To illustrate this problem, we present ground-to-UAV fire segmentation as a novel benchmark for evaluating typical UDA methods, and propose an effective framework, Colour-Mix, that boosts segmentation performance to a level equivalent to fully supervised training. First, we identify domain-invariant fire features by deriving fire-discriminating components (u*VS) defined in the colour spaces Lu*v*, YUV, and HSV. Notably, we devise criteria for combining the components that are beneficial for integrating colour signals into deep-learning training, thus significantly improving the generalisation ability of the framework without resorting to UDA techniques. Second, we perform class-specific mixing to eliminate irrelevant background content from the ground-level scenes and to enrich the annotated fire samples for the UAV imagery. Third, we propose to disentangle the feature encoding for the different domains and to use class-specific mixing as a robust training signal for the target domain. The framework is validated on the drone-captured dataset Flame, using the combined ground-level source datasets Street Fire and Corsica Wildfires. The code is available at https://github.com/Rui-Zhou-2/Colour-Mix.
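
As a concrete reading of the first step, the sketch below extracts the u*VS components with OpenCV: the u* channel of CIE Lu*v*, the V channel of YUV, and the S channel of HSV. This is a minimal sketch based only on the abstract, not the released implementation; the function name, the input file name, and the reliance on OpenCV's 8-bit channel conventions are assumptions.

```python
import cv2
import numpy as np

def uvs_components(bgr: np.ndarray) -> np.ndarray:
    """Stack the u* (Lu*v*), V (YUV) and S (HSV) channels of a BGR image."""
    u_star = cv2.cvtColor(bgr, cv2.COLOR_BGR2Luv)[..., 1]  # u* from CIE Lu*v*
    v_yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)[..., 2]   # V from YUV
    s_hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[..., 1]   # S from HSV
    return np.stack([u_star, v_yuv, s_hsv], axis=-1)       # H x W x 3 cue map

frame = cv2.imread("uav_frame.png")   # hypothetical input frame (BGR, uint8)
colour_cues = uvs_components(frame)   # colour signals to feed into training
```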
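The second step, class-specific mixing, can be read as a ClassMix-style paste of annotated fire pixels from a ground-level source image onto a UAV target image, which discards the irrelevant ground background while enriching fire samples in the target domain. The sketch below is an assumption based on that reading; the label id and function name are hypothetical.

```python
import numpy as np

FIRE = 1  # hypothetical integer label id for the fire class

def class_specific_mix(src_img: np.ndarray, src_mask: np.ndarray,
                       tgt_img: np.ndarray, tgt_mask: np.ndarray):
    """Paste source fire pixels (and their labels) onto the target sample."""
    fire = src_mask == FIRE              # boolean H x W selector of fire pixels
    mixed_img = tgt_img.copy()
    mixed_img[fire] = src_img[fire]      # transplant only the fire pixels
    mixed_mask = tgt_mask.copy()
    mixed_mask[fire] = FIRE              # labels follow the pasted pixels
    return mixed_img, mixed_mask
```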