TG-LMM: Enhancing Medical Image Segmentation Accuracy through Text-Guided Large Multi-Modal Model
Yihao Zhao, Enhao Zhong, Cuiyun Yuan, Yang Li, Man Zhao, Chunxia Li, Jun Hu, Chenbin Liu
arXiv - PHYS - Medical Physics, 2024-09-05. https://doi.org/arxiv-2409.03412
Abstract
We propose TG-LMM (Text-Guided Large Multi-Modal Model), a novel approach that leverages textual descriptions of organs to enhance segmentation accuracy in medical images. Existing medical image segmentation methods face several challenges: current automatic medical segmentation models do not effectively utilize prior knowledge, such as descriptions of organ locations; previous text-visual models focus on identifying the target rather than improving segmentation accuracy; and earlier models that do exploit prior knowledge to enhance accuracy do not incorporate pre-trained models. To address these issues, TG-LMM integrates prior knowledge, specifically expert descriptions of the spatial locations of organs, into the segmentation process. Our model uses pre-trained image and text encoders to reduce the number of trainable parameters and accelerate training. Additionally, we designed a comprehensive image-text information fusion structure to ensure thorough integration of the two modalities. We evaluated TG-LMM on three authoritative medical image datasets covering the segmentation of various parts of the human body. Our method demonstrated superior performance compared to existing approaches such as MedSAM, SAM, and nnU-Net.
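
To make the high-level design concrete, below is a minimal PyTorch-style sketch of how a text-guided segmentation model with frozen pre-trained encoders and a cross-attention fusion block might be wired together. The module names, tensor shapes, encoder interfaces, and fusion mechanism are illustrative assumptions, not the actual TG-LMM architecture described in the paper.

```python
import torch
import torch.nn as nn


class TextGuidedSegmenter(nn.Module):
    """Hypothetical sketch: frozen pre-trained encoders + a trainable fusion decoder.

    Assumes image_encoder(image) -> (B, C, H, W) feature maps and
    text_encoder(tokens) -> (B, T, txt_dim) token features.
    """

    def __init__(self, image_encoder, text_encoder, img_dim=256, txt_dim=512, num_heads=8):
        super().__init__()
        self.image_encoder = image_encoder
        self.text_encoder = text_encoder
        # Freeze the pre-trained encoders so only the fusion/decoder weights train,
        # reducing trainable parameters as the abstract describes.
        for p in self.image_encoder.parameters():
            p.requires_grad = False
        for p in self.text_encoder.parameters():
            p.requires_grad = False
        self.txt_proj = nn.Linear(txt_dim, img_dim)
        # Cross-attention: image patch tokens attend to the organ-description tokens.
        self.cross_attn = nn.MultiheadAttention(img_dim, num_heads, batch_first=True)
        self.decoder = nn.Sequential(
            nn.Conv2d(img_dim, img_dim // 2, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(img_dim // 2, 1, kernel_size=1),  # single-channel mask logits
        )

    def forward(self, image, text_tokens):
        img_feat = self.image_encoder(image)                       # (B, C, H, W)
        txt_feat = self.txt_proj(self.text_encoder(text_tokens))   # (B, T, C)
        B, C, H, W = img_feat.shape
        img_seq = img_feat.flatten(2).transpose(1, 2)              # (B, H*W, C)
        fused, _ = self.cross_attn(img_seq, txt_feat, txt_feat)    # text-guided fusion
        fused = fused.transpose(1, 2).reshape(B, C, H, W)
        # Mask logits at feature resolution; a real model would upsample to image size.
        return self.decoder(fused)                                 # (B, 1, H, W)


if __name__ == "__main__":
    # Dummy stand-ins for the frozen pre-trained encoders (hypothetical shapes only).
    img_enc = nn.Conv2d(1, 256, kernel_size=16, stride=16)   # (B,1,256,256) -> (B,256,16,16)
    txt_enc = nn.Embedding(1000, 512)                        # token ids -> (B,T,512)
    model = TextGuidedSegmenter(img_enc, txt_enc)
    masks = model(torch.randn(2, 1, 256, 256), torch.randint(0, 1000, (2, 12)))
    print(masks.shape)  # torch.Size([2, 1, 16, 16])
```

The design choice illustrated here is simply that the spatial-location text acts as the key/value context for cross-attention over image features, so the segmentation head is conditioned on the prior knowledge rather than on the image alone.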