The Vision-Language Model (VLM) is a class of multi-modal deep learning model that fuses visual and linguistic information to enhance the understanding and analysis of visual content. VLMs were originally used to integrate multi-modal information and improve task accuracy; they were later combined with zero-shot and few-shot learning to address the scarcity of labeled medical data. Today, VLMs form the technical foundation of popular general-purpose medical large models, and their role is no longer limited to simple information fusion. This paper presents a comprehensive review of the development and application of VLM-based medical image analysis techniques. Specifically, it first introduces the basic principles of VLMs and explains the pre-training and fine-tuning framework. It then surveys research progress across application scenarios including medical image classification, segmentation, report generation, question answering, image generation, and large models. The paper also summarizes seven main characteristics of medical image VLMs and analyzes how these characteristics manifest in each task. Finally, the challenges, potential solutions, and future directions in this field are discussed. VLMs are still developing rapidly in medical image analysis, and a continuously updated repository of papers and code has been built, available at https://github.com/XiangQA-Q/VLM-in-MIA.