Authors: Yuhan Song, Armagan Elibol, Nak Young Chong
Journal: IFAC Journal of Systems and Control, Vol. 27, Article 100249
DOI: 10.1016/j.ifacsc.2024.100249
Published: 2024-02-27
Abdominal multi-organ segmentation using multi-scale and context-aware neural networks
Recent advancements in AI have significantly enhanced smart diagnostic methods, bringing us closer to end-to-end diagnosis. Ultrasound image segmentation plays a crucial role in this diagnostic process: an accurate and robust segmentation model accelerates diagnosis and reduces the burden on sonographers. In contrast to previous research, we consider two inherent features of ultrasound images: (1) different organs and tissues vary in spatial size, and (2) the anatomical structures inside the human body maintain relatively constant spatial relationships. Based on these two observations, we propose two segmentation models that combine multi-scale convolutional neural network backbones with a spatial context feature extractor. We discuss two backbone structures for extracting anatomical structures at different scales: the Feature Pyramid Network (FPN) backbone and the Trident Network backbone. Moreover, we show how a Spatial Recurrent Neural Network (SRNN) is implemented to extract spatial context features from abdominal ultrasound images. Our proposed models achieved Dice coefficient scores of 0.919 and 0.931, respectively.
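The abstract does not include code, but the multi-scale idea behind the Trident backbone is worth making concrete: one shared convolution kernel is applied at several dilation rates, so each branch sees a different receptive-field size while the weights stay identical. The toy NumPy sketch below (single channel, a fixed kernel, and function names of our own choosing — not the authors' implementation) illustrates that weight-sharing multi-branch structure:

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """'Same'-padded 2-D cross-correlation of a single-channel image
    with the given dilation rate (no stride, no bias)."""
    kh, kw = kernel.shape
    ph, pw = dilation * (kh // 2), dilation * (kw // 2)
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(kh):
        for j in range(kw):
            di, dj = i * dilation, j * dilation
            out += kernel[i, j] * xp[di:di + x.shape[0], dj:dj + x.shape[1]]
    return out

def trident_block(x, kernel, rates=(1, 2, 3)):
    """Trident-style multi-branch block: ONE shared kernel applied at
    several dilation rates, giving several receptive-field sizes."""
    return [dilated_conv2d(x, kernel, r) for r in rates]

# Toy usage: three branches over one 32x32 "feature map".
rng = np.random.default_rng(0)
img = rng.random((32, 32))
k = rng.random((3, 3))
branches = trident_block(img, k)
print([b.shape for b in branches])  # [(32, 32), (32, 32), (32, 32)]
```

In a real network each branch would be followed by shared nonlinearities and trained end to end; here the point is only that spatial scale is controlled by the dilation rate, not by extra parameters.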
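The abstract likewise gives no implementation detail for the SRNN context extractor. As a generic illustration of how spatial RNNs gather context (in the spirit of four-directional sweep modules; a fixed decay stands in for learned recurrence weights, and every name below is our own), each row/column of the feature map can be processed as a recurrent sequence in four directions and the results summed:

```python
import numpy as np

def directional_sweep(feat, axis, reverse=False, decay=0.5):
    """Accumulate context along one spatial axis with a leaky
    recurrence: h[i] = feat[i] + decay * relu(h[i-1])."""
    f = np.flip(feat, axis=axis) if reverse else feat
    f = np.moveaxis(f, axis, 0)          # bring the swept axis to the front
    out = np.empty_like(f)
    h = np.zeros_like(f[0])
    for i in range(f.shape[0]):
        h = f[i] + decay * np.maximum(h, 0.0)
        out[i] = h
    out = np.moveaxis(out, 0, axis)
    return np.flip(out, axis=axis) if reverse else out

def spatial_context(feat, decay=0.5):
    """Four-directional sweep (down, up, right, left), summed, so every
    position receives information from the whole row and column."""
    return sum(
        directional_sweep(feat, axis, rev, decay)
        for axis in (0, 1) for rev in (False, True)
    )

# Toy usage on an 8x8 feature map with 16 channels.
rng = np.random.default_rng(0)
fmap = rng.random((8, 8, 16))
ctx = spatial_context(fmap)
print(ctx.shape)  # (8, 8, 16)
```

This captures the key property the abstract relies on — relatively constant spatial relationships between organs can be encoded because every output position aggregates evidence from distant positions along its row and column.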