WeakCLIP: Adapting CLIP for Weakly-Supervised Semantic Segmentation

International Journal of Computer Vision · Pub Date: 2024-09-05 · DOI: 10.1007/s11263-024-02224-2
Lianghui Zhu, Xinggang Wang, Jiapei Feng, Tianheng Cheng, Yingyue Li, Bo Jiang, Dingwen Zhang, Junwei Han

Abstract

Contrastive language-image pre-training (CLIP) has achieved great success across a range of computer vision tasks, and its large-scale pre-trained knowledge offers a promising avenue for enhancing weakly-supervised image understanding. Weakly-supervised semantic segmentation (WSSS) reduces reliance on pixel-level human annotations by refining the class activation map (CAM) into high-quality pseudo masks, but it depends heavily on inductive biases such as hand-crafted priors and digital image processing methods. For the vision-language pre-trained model CLIP, we propose a novel text-to-pixel matching paradigm for WSSS. However, directly applying CLIP to WSSS is challenging due to three critical problems: (1) the task gap between contrastive pre-training and WSSS CAM refinement, (2) the lack of text-to-pixel modeling needed to fully exploit the pre-trained knowledge, and (3) insufficient detail owing to the \(\frac{1}{16}\) down-sampled resolution of ViT features. We therefore propose WeakCLIP to address these problems and transfer the pre-trained knowledge of CLIP to WSSS. Specifically, we first bridge the task gap with a pyramid adapter and learnable prompts that extract WSSS-specific representations. We then design a co-attention matching module to model text-to-pixel relationships. Finally, the pyramid adapter and a text-guided decoder gather multi-level information and integrate it with text guidance hierarchically. WeakCLIP provides an effective and parameter-efficient way to transfer CLIP knowledge for CAM refinement. Extensive experiments demonstrate that WeakCLIP achieves state-of-the-art WSSS performance on standard benchmarks: 74.0% mIoU on the val set of PASCAL VOC 2012 and 46.1% mIoU on the val set of COCO 2014.
The source code and model checkpoints are released at https://github.com/hustvl/WeakCLIP.
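The core idea of text-to-pixel matching can be illustrated with a minimal sketch: score every dense visual feature against per-class CLIP text embeddings by cosine similarity, yielding a per-pixel class score map. This is only a hedged illustration of the paradigm, not the paper's actual co-attention matching module; the function name, shapes, and temperature value below are assumptions for the example (the 14×14 grid reflects a 224-pixel input at the \(\frac{1}{16}\) ViT resolution mentioned in the abstract).

```python
import numpy as np

def text_to_pixel_scores(pixel_feats, text_embeds, temperature=0.07):
    """Cosine-similarity matching of text embeddings against dense pixel features.

    pixel_feats: (C, H, W) dense visual features for one image.
    text_embeds: (K, C) one text embedding per class.
    Returns a (K, H, W) per-pixel class score map.
    """
    C, H, W = pixel_feats.shape
    pix = pixel_feats.reshape(C, -1)                                  # (C, H*W)
    pix = pix / np.linalg.norm(pix, axis=0, keepdims=True)            # L2-normalize each pixel
    txt = text_embeds / np.linalg.norm(text_embeds, axis=1, keepdims=True)
    sim = (txt @ pix) / temperature                                   # (K, H*W) scaled cosine sims
    return sim.reshape(-1, H, W)

# Example: 512-dim features on a 14x14 grid, 20 classes (PASCAL VOC foreground count)
scores = text_to_pixel_scores(np.random.randn(512, 14, 14), np.random.randn(20, 512))
print(scores.shape)  # (20, 14, 14)
```

Taking an argmax (or threshold) over the class axis of such a score map is the simplest way to turn text-to-pixel similarities into a coarse mask; the paper's contribution lies in refining this with learned prompts, co-attention, and a multi-level decoder rather than raw similarity alone.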
