Vision-Language Pre-training: Basics, Recent Advances, and Future Trends

Foundations and Trends in Computer Graphics and Vision, pp. 163-352 · IF 3.8, Q2 (COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS) · Pub Date: 2022-10-17 · DOI: 10.48550/arXiv.2210.09263
Zhe Gan, Linjie Li, Chunyuan Li, Lijuan Wang, Zicheng Liu, Jianfeng Gao
Citations: 70

Abstract

This paper surveys vision-language pre-training (VLP) methods for multimodal intelligence that have been developed in the last few years. We group these approaches into three categories: (i) VLP for image-text tasks, such as image captioning, image-text retrieval, visual question answering, and visual grounding; (ii) VLP for core computer vision tasks, such as (open-set) image classification, object detection, and segmentation; and (iii) VLP for video-text tasks, such as video captioning, video-text retrieval, and video question answering. For each category, we present a comprehensive review of state-of-the-art methods, and discuss the progress that has been made and challenges still being faced, using specific systems and models as case studies. In addition, for each category, we discuss advanced topics being actively explored in the research community, such as big foundation models, unified modeling, in-context few-shot learning, knowledge, robustness, and computer vision in the wild, to name a few.
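To make the first category concrete: many image-text VLP methods train a dual-encoder with a contrastive objective that pulls matched image-text pairs together and pushes mismatched pairs apart, which directly enables image-text retrieval and open-set classification. The sketch below is a minimal NumPy implementation of a CLIP-style symmetric InfoNCE loss over random feature vectors; it illustrates the objective only, not any specific system surveyed here, and the `temperature` value and batch size are illustrative assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project feature vectors onto the unit sphere."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def log_softmax(z, axis):
    """Numerically stable log-softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=axis, keepdims=True))

def clip_style_loss(image_feats, text_feats, temperature=0.07):
    """Symmetric contrastive (InfoNCE) loss over an in-batch
    similarity matrix; matched pairs sit on the diagonal."""
    img = l2_normalize(image_feats)
    txt = l2_normalize(text_feats)
    logits = img @ txt.T / temperature          # (B, B) scaled cosine similarities
    idx = np.arange(logits.shape[0])
    loss_i2t = -log_softmax(logits, axis=1)[idx, idx].mean()  # image -> text
    loss_t2i = -log_softmax(logits, axis=0)[idx, idx].mean()  # text -> image
    return (loss_i2t + loss_t2i) / 2

rng = np.random.default_rng(0)
B, D = 8, 32                                    # toy batch size and feature dim
loss = clip_style_loss(rng.normal(size=(B, D)), rng.normal(size=(B, D)))
print(f"contrastive loss on random features: {loss:.3f}")
```

When the two encoders produce aligned features for matched pairs, the diagonal dominates each row and column and the loss approaches zero; with random features it stays near log B per direction.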
Source Journal: Foundations and Trends in Computer Graphics and Vision (COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS)
CiteScore: 31.20 · Self-citation rate: 0.00% · Articles published per year: 1
Journal Description: The growth in all aspects of research in the last decade has led to a multitude of new publications and an exponential increase in published research. Finding a way through the excellent existing literature and keeping up to date has become a major, time-consuming problem. Electronic publishing has given researchers instant access to more articles than ever before. But which articles are the essential ones that should be read to understand and keep abreast of developments in any topic? To address this problem, Foundations and Trends® in Computer Graphics and Vision publishes high-quality survey and tutorial monographs in the field. Each issue of Foundations and Trends® in Computer Graphics and Vision comprises a 50-100 page monograph written by research leaders in the field. Monographs that give tutorial coverage of subjects, research retrospectives, and survey papers that offer state-of-the-art reviews all fall within the scope of the journal.
Latest Articles in This Journal
Semantic Image Segmentation: Two Decades of Research
Learning-based Visual Compression
Computational Imaging Through Atmospheric Turbulence
Vision-Language Pre-training: Basics, Recent Advances, and Future Trends
Towards Better User Studies in Computer Graphics and Vision