{"title":"基于U-Net的多尺度脑肿瘤分割方法","authors":"Lei Wang, Mingtao Liu, Yunyu Wang, Xianbiao Bai, Mengjie Zhu, Fuchun Zhang","doi":"10.1109/CCISP55629.2022.9974427","DOIUrl":null,"url":null,"abstract":"A accurately segmented tumor region has great significance in assessing the sick person with the conditions. Aiming at the problems that existing deep learning has limited ability to perceive 3D context in medical image segmentation tasks, and the edge information of tumors cannot be well preserved. Therefore, we propose an effective method to improve 3D U-Net model for segmentation. Firstly, adding a multi-scale feature extraction module can extract more receptive fields and improve the adaptability of the model to features of different scales. Secondly, decoding the position attention mechanism is added after the first upsampling, so that more effective global and local details can be extracted. Using the public dataset BraTS 2020 for training and testing, the average dice values of the proposed network model in the overall tumor area, tumor core region and tumor enhancement area reached 88.96%, 86.48% and 84.32%, respectively. From those results, we can see that the improved model has a better segmentation effect in evaluation indexes than basic models.","PeriodicalId":431851,"journal":{"name":"2022 7th International Conference on Communication, Image and Signal Processing (CCISP)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A multi-scale method based on U-Net for brain tumor segmentation\",\"authors\":\"Lei Wang, Mingtao Liu, Yunyu Wang, Xianbiao Bai, Mengjie Zhu, Fuchun Zhang\",\"doi\":\"10.1109/CCISP55629.2022.9974427\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A accurately segmented tumor region has great significance in assessing the sick person with the conditions. Aiming at the problems that existing deep learning has limited ability to perceive 3D context in medical image segmentation tasks, and the edge information of tumors cannot be well preserved. Therefore, we propose an effective method to improve 3D U-Net model for segmentation. Firstly, adding a multi-scale feature extraction module can extract more receptive fields and improve the adaptability of the model to features of different scales. Secondly, decoding the position attention mechanism is added after the first upsampling, so that more effective global and local details can be extracted. Using the public dataset BraTS 2020 for training and testing, the average dice values of the proposed network model in the overall tumor area, tumor core region and tumor enhancement area reached 88.96%, 86.48% and 84.32%, respectively. 
From those results, we can see that the improved model has a better segmentation effect in evaluation indexes than basic models.\",\"PeriodicalId\":431851,\"journal\":{\"name\":\"2022 7th International Conference on Communication, Image and Signal Processing (CCISP)\",\"volume\":\"27 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 7th International Conference on Communication, Image and Signal Processing (CCISP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CCISP55629.2022.9974427\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 7th International Conference on Communication, Image and Signal Processing (CCISP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCISP55629.2022.9974427","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A multi-scale method based on U-Net for brain tumor segmentation
Accurate segmentation of the tumor region is of great significance for assessing a patient's condition. Existing deep learning methods for medical image segmentation have a limited ability to perceive 3D context, and the edge information of tumors is not well preserved. To address these problems, we propose an effective improvement to the 3D U-Net model for segmentation. First, a multi-scale feature extraction module is added so that the network captures features at multiple receptive-field sizes, improving its adaptability to structures of different scales. Second, a position attention mechanism is added to the decoder after the first upsampling, allowing more effective global and local details to be extracted. Trained and tested on the public BraTS 2020 dataset, the proposed network achieves average Dice scores of 88.96%, 86.48%, and 84.32% on the whole tumor, tumor core, and enhancing tumor regions, respectively. These results show that the improved model outperforms the baseline models on the evaluation metrics.
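The abstract does not specify the internal design of the two added modules, so the following is only a minimal PyTorch sketch of the kind of components it describes: a 3D multi-scale feature-extraction block built from parallel dilated convolutions, a dual-attention-style position attention block applied to a decoder feature map, and the Dice score used for evaluation. All module names, layer choices, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' released code) of the building blocks
# the abstract mentions: multi-scale feature extraction, 3D position attention,
# and the Dice score used as the evaluation metric.
import torch
import torch.nn as nn


class MultiScaleBlock3D(nn.Module):
    """Parallel 3D convolutions with different dilation rates, concatenated and
    fused, so the block aggregates several receptive-field sizes at once."""

    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv3d(in_ch, out_ch, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.InstanceNorm3d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        # A 1x1x1 convolution fuses the concatenated multi-scale features.
        self.fuse = nn.Conv3d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


class PositionAttention3D(nn.Module):
    """Self-attention over spatial positions (a 3D variant of the position
    attention module from dual-attention networks): each voxel is re-weighted
    by its similarity to every other voxel, capturing global context."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.query = nn.Conv3d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv3d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv3d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, d, h, w = x.shape
        n = d * h * w
        q = self.query(x).view(b, -1, n).permute(0, 2, 1)   # (B, N, C')
        k = self.key(x).view(b, -1, n)                       # (B, C', N)
        attn = torch.softmax(torch.bmm(q, k), dim=-1)        # (B, N, N)
        v = self.value(x).view(b, c, n)                      # (B, C, N)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, d, h, w)
        return self.gamma * out + x


def dice_score(pred: torch.Tensor, target: torch.Tensor,
               eps: float = 1e-6) -> torch.Tensor:
    """Dice coefficient between a binary prediction and a ground-truth mask."""
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)


if __name__ == "__main__":
    x = torch.randn(1, 16, 16, 16, 16)         # (batch, channels, D, H, W)
    feats = MultiScaleBlock3D(16, 32)(x)        # multi-scale features
    refined = PositionAttention3D(32)(feats)    # globally re-weighted features
    print(refined.shape)                        # torch.Size([1, 32, 16, 16, 16])

    mask = torch.randint(0, 2, (1, 1, 16, 16, 16)).float()
    print(dice_score(mask, mask))               # identical masks -> Dice of 1.0
```

Note that the N x N attention map makes position attention memory-hungry at full resolution, which is consistent with applying it after the first upsampling step of the decoder, where the feature map is still relatively coarse.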