Exploring Vision-Language Foundation Model for Novel Object Captioning

IF 11.1 · JCR Q1 (Engineering, Electrical & Electronic) · CAS Region 1 (Engineering & Technology) · IEEE Transactions on Circuits and Systems for Video Technology, Vol. 35, No. 1, pp. 91-102 · Pub Date: 2024-08-30 · DOI: 10.1109/TCSVT.2024.3452437
Jianjie Luo;Yehao Li;Yingwei Pan;Ting Yao;Jianlin Feng;Hongyang Chao;Tao Mei
{"title":"Exploring Vision-Language Foundation Model for Novel Object Captioning","authors":"Jianjie Luo;Yehao Li;Yingwei Pan;Ting Yao;Jianlin Feng;Hongyang Chao;Tao Mei","doi":"10.1109/TCSVT.2024.3452437","DOIUrl":null,"url":null,"abstract":"It is always well believed that pre-trained vision-language foundation models (e.g., CLIP) would substantially facilitate vision-language tasks. Nevertheless, there has been less evidence in support of the idea on describing novel objects in images. In this paper, we propose the Novel Object Transformer with CLIP (NOTC), a Transformer-based model that innovatively exploits the powerful vision-language representation ability of CLIP to enhance novel object captioning model’s training and sentence decoding processes. Technically, given the primary bag-of-objects extracted by Faster R-CNN, NOTC first capitalize on an object distiller module to emphasize the most salient objects and infer the missing novel ones. The refined object words are additionally fed into the object-centric word predictor to generate sentence word-by-word. During training, we design a CLIP-based self-critical sequence training paradigm to select visually-grounded sampled sentence with higher CLIP score reward, which enables a joint training process of captioning model over out-domain training images with novel objects. Moreover, at inference, a new CLIP beam search algorithm is devised to enforce the existence of novel objects and encourage the partial word sequences with higher CLIP scores, thereby decoding both visually-grounded and comprehensive sentences. Extensive experiments are conducted on held-out COCO and nocaps datasets, and competitive performances are reported when compared to state-of-the-art approaches.","PeriodicalId":13082,"journal":{"name":"IEEE Transactions on Circuits and Systems for Video Technology","volume":"35 1","pages":"91-102"},"PeriodicalIF":11.1000,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems for Video Technology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10659916/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

It is widely believed that pre-trained vision-language foundation models (e.g., CLIP) can substantially facilitate vision-language tasks. Nevertheless, there has been little evidence supporting this claim for describing novel objects in images. In this paper, we propose the Novel Object Transformer with CLIP (NOTC), a Transformer-based model that exploits the powerful vision-language representation ability of CLIP to enhance both the training and the sentence-decoding processes of a novel object captioning model. Technically, given the primary bag of objects extracted by Faster R-CNN, NOTC first capitalizes on an object distiller module to emphasize the most salient objects and infer the missing novel ones. The refined object words are then fed into an object-centric word predictor to generate the sentence word by word. During training, we design a CLIP-based self-critical sequence training paradigm that selects visually grounded sampled sentences with higher CLIP-score rewards, enabling joint training of the captioning model on out-of-domain training images containing novel objects. Moreover, at inference, a new CLIP beam search algorithm is devised to enforce the presence of novel objects and to favor partial word sequences with higher CLIP scores, thereby decoding sentences that are both visually grounded and comprehensive. Extensive experiments are conducted on the held-out COCO and nocaps datasets, and competitive performance is reported compared to state-of-the-art approaches.
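The two CLIP-based ingredients described in the abstract, a CLIP-score reward for self-critical sequence training and CLIP-guided decoding, can be illustrated with a minimal sketch. The code below is not the authors' NOTC implementation: the CLIP checkpoint name, the helper functions (clip_reward, scst_loss, rerank_beams), and the weighting factor lam are assumptions made purely for illustration, using the Hugging Face transformers CLIP interface.

```python
# Minimal sketch (assumptions, not the paper's code): CLIP image-text similarity
# as a sentence-level reward for self-critical sequence training (SCST).
import torch
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def clip_reward(images, captions):
    """Cosine similarity between CLIP image and text embeddings, per pair."""
    inputs = processor(text=captions, images=images, return_tensors="pt",
                       padding=True, truncation=True)
    img_emb = clip.get_image_features(pixel_values=inputs["pixel_values"])
    txt_emb = clip.get_text_features(input_ids=inputs["input_ids"],
                                     attention_mask=inputs["attention_mask"])
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    return (img_emb * txt_emb).sum(dim=-1)          # shape: (batch,)

def scst_loss(images, sampled_caps, sampled_logprobs, greedy_caps):
    """REINFORCE with the greedy caption's CLIP score as the baseline."""
    reward = clip_reward(images, sampled_caps)       # sampled sentences
    baseline = clip_reward(images, greedy_caps)      # greedy-decoded baseline
    advantage = reward - baseline                    # higher CLIP score => positive
    # sampled_logprobs: summed log-probability of each sampled caption (requires grad)
    return -(advantage.detach() * sampled_logprobs).mean()
```

On the decoding side, an equally hedged sketch of CLIP-guided selection: candidate captions from beam search are re-ranked by a fused language-model and CLIP score, with a simplified hard constraint standing in for the paper's mechanism of enforcing detected novel objects.

```python
def rerank_beams(image, candidates, novel_objects, lam=1.0):
    """candidates: list of (caption, lm_logprob); novel_objects: detected object words."""
    caps = [c for c, _ in candidates]
    lm_scores = torch.tensor([lp for _, lp in candidates])
    clip_scores = clip_reward([image] * len(caps), caps)
    # Keep candidates mentioning every detected novel object; fall back to all if none do.
    keep = [i for i, c in enumerate(caps)
            if all(obj in c.lower() for obj in novel_objects)] or list(range(len(caps)))
    fused = lm_scores + lam * clip_scores            # lam: illustrative weighting factor
    best = max(keep, key=lambda i: fused[i].item())
    return caps[best]
```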
Source Journal
CiteScore: 13.80
Self-citation rate: 27.40%
Articles published: 660
Review time: 5 months
Journal Description: The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.