Anatomical Feature-Based Lung Ultrasound Image Quality Assessment Using Deep Convolutional Neural Network
Surya M Ravishankar, Ryosuke Tsumura, John W Hardin, Beatrice Hoffmann, Ziming Zhang, Haichong K Zhang
IEEE International Ultrasonics Symposium (IUS), 2021. Published September 2021 (Epub November 13, 2021). DOI: 10.1109/ius52206.2021.9593662
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9373065/pdf/nihms-1822596.pdf
Lung ultrasound (LUS) has been used for point-of-care diagnosis of respiratory diseases including COVID-19, with advantages such as low cost, safety, absence of radiation, and portability. The scanning procedure and assessment of LUS are highly operator-dependent, and the appearance of LUS images varies with the probe's position, orientation, and contact force. Karamalis et al. introduced the concept of ultrasound confidence maps based on random walks to assess ultrasound image quality algorithmically by estimating the per-pixel confidence in the image data. However, these confidence maps do not consider the clinical context of an image, such as anatomical feature visibility and diagnosability. This work proposes a deep convolutional network that detects important anatomical features in an LUS image to quantify its clinical context. Specifically, it introduces an Anatomical Feature-based Confidence (AFC) Map, which quantifies an LUS image's clinical context based on the visible anatomical features. We developed two U-Net models, each segmenting one of the two feature classes crucial for analyzing an LUS image, namely 1) Bright Features: pleural and rib lines and 2) Dark Features: rib shadows. Each model takes the LUS image as input and outputs the segmented regions with per-pixel confidence values for the corresponding class. The evaluation dataset consists of ultrasound images extracted from videos of two sub-regions of the chest above the anterior axillary line from three human subjects. The feature segmentation models achieved an average Dice score of 0.72 on the testing data. The average of the non-zero confidence values over all pixels was calculated and compared against the image quality scores; these average confidence values differed across image quality scores. The results demonstrate the relevance of using an AFC Map to quantify the clinical context of an LUS image.
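The abstract reports two per-image summary statistics: a Dice score for the segmentation outputs and the mean of the non-zero confidence values, used as the image-level AFC score. The Python sketch below illustrates both computations under stated assumptions; the function names, the per-pixel max fusion of the two models' confidence maps, and the synthetic inputs are illustrative and not the authors' implementation.

```python
import numpy as np


def dice_score(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between a binary prediction and ground-truth mask."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps))


def afc_score(bright_conf: np.ndarray, dark_conf: np.ndarray) -> float:
    """Collapse the two per-pixel confidence maps (bright features: pleural/rib
    lines; dark features: rib shadows) into one per-image score by averaging
    the non-zero confidence values."""
    # Fuse the two maps with a per-pixel maximum so a pixel confidently
    # assigned by either model counts once (an assumption, not specified
    # in the abstract).
    combined = np.maximum(bright_conf, dark_conf)
    nonzero = combined[combined > 0]
    return float(nonzero.mean()) if nonzero.size else 0.0


# Example with synthetic confidence maps in [0, 1] on a 256x256 grid,
# standing in for the two U-Net outputs.
rng = np.random.default_rng(0)
bright = rng.uniform(0.0, 1.0, (256, 256)) * (rng.uniform(size=(256, 256)) > 0.8)
dark = rng.uniform(0.0, 1.0, (256, 256)) * (rng.uniform(size=(256, 256)) > 0.7)
print(f"AFC score: {afc_score(bright, dark):.3f}")
print(f"Dice (bright vs. dark support, toy check): {dice_score(bright > 0, dark > 0):.3f}")
```

In this sketch the AFC score is a single scalar per image, which is what allows it to be compared directly against the discrete image quality scores mentioned in the abstract.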