CLIP-SP: Vision-language model with adaptive prompting for scene parsing

Computational Visual Media · Impact Factor: 17.3 · JCR Q1 (Computer Science, Software Engineering) · CAS Tier 3 (Computer Science)
Publication date: 2024-08-27 · DOI: 10.1007/s41095-024-0430-4
Jiaao Li, Yixiang Huang, Ming Wu, Bin Zhang, Xu Ji, Chuang Zhang
{"title":"CLIP-SP: Vision-language model with adaptive prompting for scene parsing","authors":"Jiaao Li, Yixiang Huang, Ming Wu, Bin Zhang, Xu Ji, Chuang Zhang","doi":"10.1007/s41095-024-0430-4","DOIUrl":null,"url":null,"abstract":"<p>We present a novel framework, CLIP-SP, and a novel adaptive prompt method to leverage pre-trained knowledge from CLIP for scene parsing. Our approach addresses the limitations of DenseCLIP, which demonstrates the superior image segmentation provided by CLIP pre-trained models over ImageNet pre-trained models, but struggles with rough pixel-text score maps for complex scene parsing. We argue that, as they contain all textual information in a dataset, the pixel-text score maps, i.e., dense prompts, are inevitably mixed with noise. To overcome this challenge, we propose a two-step method. Firstly, we extract visual and language features and perform multi-label classification to identify the most likely categories in the input images. Secondly, based on the top-<i>k</i> categories and confidence scores, our method generates scene tokens which can be treated as adaptive prompts for implicit modeling of scenes, and incorporates them into the visual features fed into the decoder for segmentation. Our method imposes a constraint on prompts and suppresses the probability of irrelevant categories appearing in the scene parsing results. Our method achieves competitive performance, limited by the available visual-language pre-trained models. Our CLIP-SP performs 1.14% better (in terms of mIoU) than DenseCLIP on ADE20K, using a ResNet-50 backbone.\n</p>","PeriodicalId":37301,"journal":{"name":"Computational Visual Media","volume":null,"pages":null},"PeriodicalIF":17.3000,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computational Visual Media","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s41095-024-0430-4","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0

Abstract

We present a novel framework, CLIP-SP, and a novel adaptive prompt method to leverage pre-trained knowledge from CLIP for scene parsing. Our approach addresses the limitations of DenseCLIP, which demonstrates the superior image segmentation provided by CLIP pre-trained models over ImageNet pre-trained models, but struggles with rough pixel-text score maps for complex scene parsing. We argue that, as they contain all textual information in a dataset, the pixel-text score maps, i.e., dense prompts, are inevitably mixed with noise. To overcome this challenge, we propose a two-step method. Firstly, we extract visual and language features and perform multi-label classification to identify the most likely categories in the input images. Secondly, based on the top-k categories and confidence scores, our method generates scene tokens which can be treated as adaptive prompts for implicit modeling of scenes, and incorporates them into the visual features fed into the decoder for segmentation. Our method imposes a constraint on prompts and suppresses the probability of irrelevant categories appearing in the scene parsing results. Our method achieves competitive performance, limited by the available visual-language pre-trained models. Our CLIP-SP performs 1.14% better (in terms of mIoU) than DenseCLIP on ADE20K, using a ResNet-50 backbone.
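
The two-step design described in the abstract (multi-label classification to identify the likely categories, then scene tokens built from the top-k categories and their confidence scores and fused into the visual features before decoding) can be illustrated with a short sketch. The following is a minimal PyTorch-style sketch, not the authors' implementation: the class name `AdaptivePromptSceneParser`, the encoder/decoder interfaces, the `scene_token_proj` layer, and the confidence-weighted fusion are all assumptions made for illustration only.

```python
# Minimal PyTorch-style sketch of the two-step adaptive prompting idea.
# All module and variable names are hypothetical; this is not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptivePromptSceneParser(nn.Module):
    def __init__(self, image_encoder, text_encoder, decoder,
                 num_classes, embed_dim=512, top_k=5):
        super().__init__()
        self.image_encoder = image_encoder      # e.g., a CLIP ResNet-50 visual backbone
        self.text_encoder = text_encoder        # frozen CLIP text encoder (used offline)
        self.decoder = decoder                  # any dense-prediction decoder head
        self.top_k = top_k
        # Step 1: multi-label classification head over all dataset categories.
        self.cls_head = nn.Linear(embed_dim, num_classes)
        # Step 2: projects (category text feature, confidence) pairs into scene tokens.
        self.scene_token_proj = nn.Linear(embed_dim + 1, embed_dim)

    def forward(self, images, class_text_embeds):
        # class_text_embeds: (num_classes, embed_dim) CLIP text features, one per category.
        feats, global_feat = self.image_encoder(images)        # dense feats (B, C, H, W) + pooled feature (B, C)

        # Step 1: predict which categories are likely present in the scene.
        logits = self.cls_head(global_feat)                    # (B, num_classes)
        probs = torch.sigmoid(logits)
        conf, top_idx = probs.topk(self.top_k, dim=-1)         # both (B, k)

        # Step 2: build scene tokens from the top-k category text features and confidences.
        top_text = class_text_embeds[top_idx]                  # (B, k, embed_dim)
        scene_tokens = self.scene_token_proj(
            torch.cat([top_text, conf.unsqueeze(-1)], dim=-1)  # (B, k, embed_dim + 1)
        )

        # Fuse scene tokens into the visual features: here a simple confidence-weighted
        # summary token added to every spatial location (one of many possible fusions).
        weights = F.softmax(conf, dim=-1).unsqueeze(-1)        # (B, k, 1)
        summary = (scene_tokens * weights).sum(dim=1)          # (B, embed_dim)
        fused = feats + summary[:, :, None, None]              # broadcast over H, W

        return self.decoder(fused), logits                     # segmentation map + aux classification logits
```

The point this sketch tries to capture is the contrast drawn in the abstract: instead of dense pixel-text score maps that always involve every category in the dataset, the prompts are restricted to the top-k predicted categories, which constrains the prompt set and suppresses irrelevant classes in the parsing result.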


Source journal: Computational Visual Media (Computer Science – Computer Graphics and Computer-Aided Design)

CiteScore: 16.90
Self-citation rate: 5.80%
Articles per year: 243
Review time: 6 weeks
Journal description: Computational Visual Media is a peer-reviewed open access journal. It publishes original high-quality research papers and significant review articles on novel ideas, methods, and systems relevant to visual media. Computational Visual Media publishes articles that focus on, but are not limited to, the following areas:
• Editing and composition of visual media
• Geometric computing for images and video
• Geometry modeling and processing
• Machine learning for visual media
• Physically based animation
• Realistic rendering
• Recognition and understanding of visual media
• Visual computing for robotics
• Visualization and visual analytics
Other interdisciplinary research into visual media that combines aspects of computer graphics, computer vision, image and video processing, geometric computing, and machine learning is also within the journal's scope. This is an open access journal, published quarterly by Tsinghua University Press and Springer. The open access fees (article-processing charges) are fully sponsored by Tsinghua University, China. Authors can publish in the journal without any additional charges.
Latest articles in this journal:
• TrafPS: A Shapley-based visual analytics approach to interpret traffic
• CLIP-Flow: Decoding images encoded in CLIP space
• CLIP-SP: Vision-language model with adaptive prompting for scene parsing
• SGformer: Boosting transformers for indoor lighting estimation from a single image
• Central similarity consistency hashing for asymmetric image retrieval