{"title":"G-SAM: GMM-based segment anything model for medical image classification and segmentation","authors":"Xiaoxiao Liu, Yan Zhao, Shigang Wang, Jian Wei","doi":"10.1007/s10586-024-04679-x","DOIUrl":null,"url":null,"abstract":"<p>In medical imaging, the classification and segmentation of lesions have always been significant topics in clinical research. Different categories of lesions require different treatment strategies, and accurate segmentation helps to assist in improving the effect of the clinical treatment. The Segment anything model (SAM) is an image segmentation model trained on a large-scale dataset with strong prompt segmentation capability, but it cannot be directly applied to the classification and segmentation tasks of medical images due to insufficient training on medical image data. In this paper, we propose a deep learning method for the classification and segmentation of lesions, called GMM-based segment anything model (G-SAM). Prompt-tuning is utilized in the model with the LoRA strategy, and the lesion feature extraction (GFE) module based on the Gaussian mixture model (GMM), is designed to effectively improve the effect of lesion classification and segmentation on the basis of the SAM. Notably, G-SAM exhibits greater sensitivity to early stage of the lesions, aiding in tumor detection and prevention, which holds important clinical value. G-SAM overcomes the limitation that SAM is not suitable for the medical image classification and segmentation tasks due to insufficient training data with minimal cost. Moreover, it enhances classification accuracy and segmentation precision compared to traditional Gaussian model-based methods. The effectiveness of G-SAM in classifying and segmenting lesions is validated on the LIDC dataset, demonstrating advantages over state-of-the-art (SOTA) methods. The study further validates the applicability of G-SAM on large publicly available datasets across three different image modalities, achieving superior performance.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"35 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cluster Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s10586-024-04679-x","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
In medical imaging, the classification and segmentation of lesions have long been significant topics in clinical research. Different categories of lesions require different treatment strategies, and accurate segmentation helps improve the effect of clinical treatment. The Segment Anything Model (SAM) is an image segmentation model trained on a large-scale dataset with strong promptable segmentation capability, but it cannot be directly applied to medical image classification and segmentation tasks because it is insufficiently trained on medical image data. In this paper, we propose a deep learning method for the classification and segmentation of lesions, called the GMM-based segment anything model (G-SAM). The model applies prompt-tuning with the LoRA strategy, and a lesion feature extraction (GFE) module based on the Gaussian mixture model (GMM) is designed to effectively improve lesion classification and segmentation on the basis of SAM. Notably, G-SAM exhibits greater sensitivity to early-stage lesions, aiding tumor detection and prevention, which holds important clinical value. With minimal cost, G-SAM overcomes the limitation that SAM is unsuitable for medical image classification and segmentation tasks due to insufficient training data. Moreover, it achieves higher classification accuracy and segmentation precision than traditional Gaussian model-based methods. The effectiveness of G-SAM in classifying and segmenting lesions is validated on the LIDC dataset, demonstrating advantages over state-of-the-art (SOTA) methods. The study further validates the applicability of G-SAM on large publicly available datasets across three different image modalities, achieving superior performance.
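
As a rough illustration of the two building blocks named in the abstract (LoRA-based tuning of a frozen backbone and GMM-based feature extraction), the sketch below uses generic PyTorch and scikit-learn APIs. It is not the authors' implementation: the `LoRALinear` wrapper, the `gmm_lesion_features` helper, and all shapes and hyperparameters are illustrative assumptions, since the paper does not publish code here.

```python
# Minimal sketch of the techniques named in the abstract, not the G-SAM code.
# Assumptions: how LoRA is inserted into SAM and how the GFE module consumes
# GMM statistics are not specified on this page, so both are illustrative.

import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture


class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # keep the pretrained weights frozen
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)


def gmm_lesion_features(embeddings: torch.Tensor, n_components: int = 3) -> torch.Tensor:
    """Fit a GMM to per-pixel encoder embeddings and return soft component maps.

    embeddings: (H, W, C) feature map from an image encoder (hypothetical shape).
    Returns an (H, W, n_components) tensor of posterior responsibilities that a
    downstream classification/segmentation head could use as lesion-aware features.
    """
    h, w, c = embeddings.shape
    flat = embeddings.reshape(-1, c).detach().cpu().numpy()
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag", random_state=0)
    gmm.fit(flat)
    resp = gmm.predict_proba(flat)                # soft assignment to Gaussian components
    return torch.from_numpy(resp).reshape(h, w, n_components).float()


if __name__ == "__main__":
    # Toy usage: adapt a stand-in projection layer and extract GMM responsibility maps.
    proj = LoRALinear(nn.Linear(256, 256), rank=4)
    tokens = torch.randn(16, 256)
    print(proj(tokens).shape)                          # torch.Size([16, 256])

    fake_embeddings = torch.randn(32, 32, 64)          # placeholder for an image-encoder output
    print(gmm_lesion_features(fake_embeddings).shape)  # torch.Size([32, 32, 3])
```

Only the low-rank matrices are trainable in this sketch, which reflects the low-cost adaptation argument made in the abstract; the GMM responsibilities are one plausible way to expose Gaussian-mixture structure to a downstream head.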