A SAM-adapted weakly-supervised semantic segmentation method constrained by uncertainty and transformation consistency

Yinxia Cao, Xin Huang, Qihao Weng
DOI: 10.1016/j.jag.2025.104440
Journal: International Journal of Applied Earth Observation and Geoinformation (ITC Journal), Vol. 137, Article 104440
Published: 2025-02-25 · Impact Factor: 7.6 · JCR: Q1 (Remote Sensing)
Citations: 0

Abstract

Semantic segmentation of remote sensing imagery is a fundamental task for generating pixel-wise category maps. Existing deep learning networks rely heavily on dense pixel-wise labels, which incur high acquisition costs. Given this challenge, this study introduces sparse point labels, a cost-effective type of weak label, for semantic segmentation. Existing weakly-supervised methods often leverage low-level visual or high-level semantic features from networks to generate supervision for unlabeled pixels, which can easily introduce label noise. Furthermore, these methods rarely explore the general-purpose foundation model, the segment anything model (SAM), which has strong zero-shot generalization capacity in image segmentation. In this paper, we propose a SAM-adapted weakly-supervised method with three components: 1) an adapted EfficientViT-SAM network (AESAM) for semantic segmentation guided by point labels, 2) an uncertainty-based pseudo-label generation module that selects reliable pseudo-labels for supervising unlabeled pixels, and 3) a transformation consistency constraint that enhances AESAM's robustness to data perturbations. The proposed method was tested on the ISPRS Vaihingen dataset (airborne), the Zurich Summer dataset (satellite), and the UAVid dataset (drone). Results demonstrated a significant improvement in mean F1 (by 5.89%–10.56%) and mean IoU (by 5.95%–11.13%) over the baseline method. Compared to the closest competitors, there was an increase in mean F1 (by 0.83%–5.29%) and mean IoU (by 1.04%–6.54%). Furthermore, our approach requires fine-tuning only a small number of parameters (0.9 M) using cheap point labels, making it promising for scenarios with limited labeling budgets. The code is available at https://github.com/lauraset/SAM-UTC-WSSS.
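The two supervision components described above — uncertainty-based pseudo-label selection and the transformation consistency constraint — can be illustrated with a minimal sketch. This is not the paper's exact formulation: the entropy-based uncertainty measure, the threshold value, the ignore index, and all function names here are illustrative assumptions; see the authors' repository for the actual implementation.

```python
import numpy as np

IGNORE = 255  # hypothetical ignore index: pixels excluded from the loss

def entropy(probs, eps=1e-8):
    """Per-pixel predictive entropy of class probabilities with shape (C, H, W).
    Higher entropy means a less confident (more uncertain) prediction."""
    return -np.sum(probs * np.log(probs + eps), axis=0)

def select_pseudo_labels(probs, threshold=0.5):
    """Uncertainty-based pseudo-label selection (illustrative): keep the argmax
    class only where entropy is below the threshold; mark uncertain pixels
    with IGNORE so they provide no supervision for unlabeled regions."""
    labels = probs.argmax(axis=0)
    labels[entropy(probs) > threshold] = IGNORE
    return labels

def flip_consistency(probs, probs_flipped):
    """Transformation consistency penalty (illustrative, horizontal flip only):
    predictions on a flipped input should match the flipped predictions on
    the original input; here measured as a mean squared difference."""
    return float(np.mean((probs[:, :, ::-1] - probs_flipped) ** 2))
```

For example, with a (2, 1, 2) probability map where one pixel is confident ([0.99, 0.01]) and the other maximally uncertain ([0.5, 0.5]), the second pixel is assigned `IGNORE` and dropped from the pseudo-label supervision, while a perfectly flip-equivariant prediction yields a consistency penalty of zero.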
Source journal

International Journal of Applied Earth Observation and Geoinformation (ITC Journal)
Subject areas: Global and Planetary Change; Management, Monitoring, Policy and Law; Earth-Surface Processes; Computers in Earth Sciences
CiteScore: 12.00 · Self-citation rate: 0.00% · Average review time: 77 days

Journal description: The International Journal of Applied Earth Observation and Geoinformation publishes original papers that utilize earth observation data for natural resource and environmental inventory and management. These data primarily originate from remote sensing platforms, including satellites and aircraft, supplemented by surface and subsurface measurements. Addressing natural resources such as forests, agricultural land, soils, and water, as well as environmental concerns like biodiversity, land degradation, and hazards, the journal explores conceptual and data-driven approaches. It covers geoinformation themes such as capturing, databasing, visualization, interpretation, data quality, and spatial uncertainty.