{"title":"为解剖结构和病变建立通用计算机断层扫描图像分割模型","authors":"Xi Ouyang, Dongdong Gu, Xuejian Li, Wenqi Zhou, Qianqian Chen, Yiqiang Zhan, Xiang Sean Zhou, Feng Shi, Zhong Xue, Dinggang Shen","doi":"10.1038/s44172-024-00287-0","DOIUrl":null,"url":null,"abstract":"Numerous deep-learning models have been developed using task-specific data, but they ignore the inherent connections among different tasks. By jointly learning a wide range of segmentation tasks, we prove that a general medical image segmentation model can improve segmentation performance for computerized tomography (CT) volumes. The proposed general CT image segmentation (gCIS) model utilizes a common transformer-based encoder for all tasks and incorporates automatic pathway modules for task prompt-based decoding. It is trained on one of the largest datasets, comprising 36,419 CT scans and 83 tasks. gCIS can automatically perform various segmentation tasks using automatic pathway modules of decoding networks through text prompt inputs, achieving an average Dice coefficient of 82.84%. Furthermore, the proposed automatic pathway routing mechanism allows for parameter pruning of the network during deployment, and gCIS can also be quickly adapted to unseen tasks with minimal training samples while maintaining great performance. Xi Ouyang et al. developed a unified machine-learning model for multi-task segmentation in computed tomography images. After collating a large dataset composed of over 35K scans, the model presented superior results compared to the state-of-the-art in various tasks.","PeriodicalId":72644,"journal":{"name":"Communications engineering","volume":" ","pages":"1-11"},"PeriodicalIF":0.0000,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.nature.com/articles/s44172-024-00287-0.pdf","citationCount":"0","resultStr":"{\"title\":\"Towards a general computed tomography image segmentation model for anatomical structures and lesions\",\"authors\":\"Xi Ouyang, Dongdong Gu, Xuejian Li, Wenqi Zhou, Qianqian Chen, Yiqiang Zhan, Xiang Sean Zhou, Feng Shi, Zhong Xue, Dinggang Shen\",\"doi\":\"10.1038/s44172-024-00287-0\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Numerous deep-learning models have been developed using task-specific data, but they ignore the inherent connections among different tasks. By jointly learning a wide range of segmentation tasks, we prove that a general medical image segmentation model can improve segmentation performance for computerized tomography (CT) volumes. The proposed general CT image segmentation (gCIS) model utilizes a common transformer-based encoder for all tasks and incorporates automatic pathway modules for task prompt-based decoding. It is trained on one of the largest datasets, comprising 36,419 CT scans and 83 tasks. gCIS can automatically perform various segmentation tasks using automatic pathway modules of decoding networks through text prompt inputs, achieving an average Dice coefficient of 82.84%. Furthermore, the proposed automatic pathway routing mechanism allows for parameter pruning of the network during deployment, and gCIS can also be quickly adapted to unseen tasks with minimal training samples while maintaining great performance. Xi Ouyang et al. developed a unified machine-learning model for multi-task segmentation in computed tomography images. 
After collating a large dataset composed of over 35K scans, the model presented superior results compared to the state-of-the-art in various tasks.\",\"PeriodicalId\":72644,\"journal\":{\"name\":\"Communications engineering\",\"volume\":\" \",\"pages\":\"1-11\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-10-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.nature.com/articles/s44172-024-00287-0.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Communications engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.nature.com/articles/s44172-024-00287-0\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Communications engineering","FirstCategoryId":"1085","ListUrlMain":"https://www.nature.com/articles/s44172-024-00287-0","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Towards a general computed tomography image segmentation model for anatomical structures and lesions
Numerous deep-learning models have been developed using task-specific data, but they ignore the inherent connections among different tasks. By jointly learning a wide range of segmentation tasks, we show that a general medical image segmentation model can improve segmentation performance for computed tomography (CT) volumes. The proposed general CT image segmentation (gCIS) model utilizes a common transformer-based encoder for all tasks and incorporates automatic pathway modules for task prompt-based decoding. It is trained on one of the largest datasets, comprising 36,419 CT scans and 83 tasks. Given text prompt inputs, gCIS automatically performs various segmentation tasks by routing through the automatic pathway modules of its decoding networks, achieving an average Dice coefficient of 82.84%. Furthermore, the proposed automatic pathway routing mechanism allows for parameter pruning of the network during deployment, and gCIS can also be quickly adapted to unseen tasks with minimal training samples while maintaining strong performance.

Xi Ouyang et al. developed a unified machine-learning model for multi-task segmentation in computed tomography images. Trained on a large dataset of over 36,000 scans, the model achieved results superior to the state of the art across various tasks.
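The abstract's reported accuracy is an average Dice coefficient, and its core idea is a shared encoder whose output is decoded along a task-specific pathway selected by a text prompt. The toy sketch below illustrates both ideas; it is not the gCIS architecture, and every module, layer size, and task name in it is an assumption made for illustration only.

```python
# Minimal illustrative sketch (not the authors' implementation): a shared
# encoder whose features are routed to a task-specific decoding pathway chosen
# by a text prompt, plus the Dice coefficient used to report segmentation
# accuracy. All module names, sizes, and tasks here are assumptions.

import torch
import torch.nn as nn


def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.bool()
    target = target.bool()
    intersection = (pred & target).sum().item()
    return (2.0 * intersection + eps) / (pred.sum().item() + target.sum().item() + eps)


class PromptRoutedSegmenter(nn.Module):
    """Shared encoder + prompt-selected decoder pathways (toy 2D stand-in)."""

    def __init__(self, tasks):
        super().__init__()
        # Shared feature extractor (a real model would use a 3D transformer encoder).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        # One lightweight decoding pathway per task; pathways not needed at a
        # deployment site could be dropped, which is the intuition behind the
        # parameter pruning mentioned in the abstract.
        self.pathways = nn.ModuleDict({t: nn.Conv2d(16, 1, 1) for t in tasks})

    def forward(self, volume: torch.Tensor, prompt: str) -> torch.Tensor:
        features = self.encoder(volume)
        return torch.sigmoid(self.pathways[prompt](features))


if __name__ == "__main__":
    model = PromptRoutedSegmenter(tasks=["liver", "lung_nodule"])
    ct_slice = torch.randn(1, 1, 64, 64)          # fake single-channel CT slice
    mask = model(ct_slice, prompt="liver") > 0.5  # the prompt picks the pathway
    print(dice_coefficient(mask, mask))           # 1.0 by construction
```

Because each prompt activates only its own decoding pathway, unused pathways can be removed at deployment without affecting the shared encoder or the remaining tasks.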