Xiang Li, Like Li, Yuchen Jiang, Hao Wang, Xinyu Qiao, Ting Feng, Hao Luo, Yong Zhao
Title: Vision-Language Models in medical image analysis: From simple fusion to general large models
Journal: Information Fusion, Vol. 118, Article 102995 (Q1, Computer Science, Artificial Intelligence; Impact Factor 14.7)
DOI: 10.1016/j.inffus.2025.102995
Published: 2025-02-06 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S1566253525000685
Citations: 0
Abstract
The Vision-Language Model (VLM) is a class of multi-modal deep learning model that fuses visual and linguistic information to enhance the understanding and analysis of visual content. VLMs were originally used to integrate multi-modal information and improve task accuracy; they were then combined with zero-shot and few-shot learning to address the shortage of labeled medical data. Today, VLMs form the technical foundation of popular general-purpose medical large models, and their role is no longer limited to simple information fusion. This paper presents a comprehensive review of the development and application of VLM-based medical image analysis. Specifically, it first introduces the basic principles and explains the pre-training and fine-tuning frameworks. It then surveys research progress in medical image classification, segmentation, report generation, question answering, image generation, large models, and other application scenarios. The paper also summarizes seven main characteristics of medical-image VLMs and analyzes how these characteristics manifest in each task. Finally, the challenges, potential solutions, and future directions of the field are discussed. VLMs are still developing rapidly in medical image analysis, and a continuously updated repository of papers and code is available at https://github.com/XiangQA-Q/VLM-in-MIA.
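The fusion idea at the core of the abstract, mapping images and text into a shared embedding space and using text prompts for zero-shot classification, as in CLIP-style pre-training, can be sketched minimally as follows. This is a toy illustration, not any model from the survey: the random projection matrices `W_img` and `W_txt` stand in for real vision and text encoders, and all names and dimensions are hypothetical.

```python
import numpy as np

# Toy stand-ins for vision/text backbones: fixed random projections
# into a shared embedding space (real VLMs use, e.g., a ViT and a
# transformer trained contrastively).
rng = np.random.default_rng(0)
D_IMG, D_TXT, D_EMB = 16, 8, 4
W_img = rng.normal(size=(D_IMG, D_EMB))
W_txt = rng.normal(size=(D_TXT, D_EMB))

def encode(x, W):
    """Project a feature vector into the shared space and L2-normalize,
    so that dot products become cosine similarities."""
    z = x @ W
    return z / np.linalg.norm(z)

def zero_shot_classify(image_feat, prompt_feats):
    """Pick the class whose text-prompt embedding is most similar
    (cosine) to the image embedding -- the zero-shot recipe that lets
    VLMs work without task-specific medical labels."""
    img = encode(image_feat, W_img)
    sims = [float(img @ encode(p, W_txt)) for p in prompt_feats]
    return int(np.argmax(sims)), sims

# One toy 'image' scored against two textual class prompts.
image = rng.normal(size=D_IMG)
prompts = [rng.normal(size=D_TXT) for _ in range(2)]
label, sims = zero_shot_classify(image, prompts)
```

In a real medical VLM the prompts would be natural-language class descriptions (e.g. radiology findings), and the encoders would be pre-trained on paired image-report data before fine-tuning.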
About the journal:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines that drive its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.