Pub Date: 2021-01-01 | DOI: 10.1177/0161734620973945
Yinmeng Wang, Yanxing Qi, Yuanyuan Wang
Minimum-variance (MV) beamforming, a typical adaptive beamforming method, has been widely studied in medical ultrasound imaging. It achieves higher spatial resolution than traditional delay-and-sum (DAS) beamforming by minimizing the total output power while preserving the desired signals. However, it suffers from high computational complexity because of the heavy calculation load of inverting the high-dimensional covariance matrix. Low-complexity MV algorithms have therefore been studied in recent years. In this study, we propose a novel MV beamformer based on orthogonal decomposition of the compounded subspace (CS) of the covariance matrix in synthetic aperture (SA) imaging, which reduces the dimensions of the covariance matrix and therefore the computational complexity. Multiwave spatial smoothing is applied to the echo signals for accurate estimation of the covariance matrix, and adaptive weight vectors are calculated from the low-dimensional subspace of the original covariance matrix. We conducted simulation, experimental, and in vivo studies to verify the performance of the proposed method. The results indicate that the proposed method maintains the advantage of high spatial resolution while effectively reducing the computational complexity compared with the standard MV beamformer. In addition, the proposed method shows good robustness against sound velocity errors.
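As a rough illustration of the computational pattern described above (not the authors' implementation), the sketch below forms a spatially smoothed covariance estimate from pre-delayed channel data and computes the standard Capon/MV weights in NumPy. The subarray length `L` and the diagonal-loading factor are illustrative assumptions.

```python
import numpy as np

def smoothed_covariance(x, L):
    """Estimate a covariance matrix by forward spatial smoothing:
    average the outer products of overlapping L-element subarrays of x."""
    M = len(x)
    subs = np.stack([x[i:i + L] for i in range(M - L + 1)])
    return (subs[:, :, None] * subs[:, None, :].conj()).mean(axis=0)

def mv_weights(R, a, diag_load=1e-3):
    """Capon/MV weights w = R^-1 a / (a^H R^-1 a).

    Diagonal loading (scaled by the average of R's diagonal) stabilizes
    the solve when R is ill-conditioned; np.linalg.solve avoids forming
    the explicit inverse.
    """
    M = R.shape[0]
    Rl = R + diag_load * (np.trace(R).real / M) * np.eye(M)
    Ri_a = np.linalg.solve(Rl, a)
    return Ri_a / (a.conj() @ Ri_a)
```

By construction the weights satisfy the distortionless constraint `a^H w = 1`, so the desired signal is passed with unit gain while interference power is minimized.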
{"title":"A Low-complexity Minimum-variance Beamformer Based on Orthogonal Decomposition of the Compounded Subspace.","authors":"Yinmeng Wang, Yanxing Qi, Yuanyuan Wang","doi":"10.1177/0161734620973945","DOIUrl":"https://doi.org/10.1177/0161734620973945","url":null,"abstract":"<p><p>Minimum-variance (MV) beamforming, as a typical adaptive beamforming method, has been widely studied in medical ultrasound imaging. This method achieves higher spatial resolution than traditional delay-and-sum (DAS) beamforming by minimizing the total output power while maintaining the desired signals. However, it suffers from high computational complexity due to the heavy calculation load when determining the inverse of the high-dimensional matrix. Low-complexity MV algorithms have been studied recently. In this study, we propose a novel MV beamformer based on orthogonal decomposition of the compounded subspace (CS) of the covariance matrix in synthetic aperture (SA) imaging, which aims to reduce the dimensions of the covariance matrix and therefore reduce the computational complexity. Multiwave spatial smoothing is applied to the echo signals for the accurate estimation of the covariance matrix, and adaptive weight vectors are calculated from the low-dimensional subspace of the original covariance matrix. We conducted simulation, experimental and in vivo studies to verify the performance of the proposed method. The results indicate that the proposed method performs well in maintaining the advantage of high spatial resolution and effectively reduces the computational complexity compared with the standard MV beamformer. 
In addition, the proposed method shows good robustness against sound velocity errors.</p>","PeriodicalId":49401,"journal":{"name":"Ultrasonic Imaging","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/0161734620973945","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38744829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The photoacoustic signal recorded by a photoacoustic imaging system can be modeled as the convolution of the initial photoacoustic response of the absorber with the system impulse response. Our goal was to compute the size of a photoacoustic absorber using the initial photoacoustic response, deconvolved from the recorded photoacoustic data. For the deconvolution, we proposed to use the impulse response of the photoacoustic system, estimated using discrete wavelet transform based homomorphic filtering. The proposed method was implemented on experimentally acquired photoacoustic data generated by different phantoms and was also verified by a simulation study involving photoacoustic targets identical to the phantoms in the experimental study. The photoacoustic system impulse response, estimated from the acquired photoacoustic signal corresponding to a lead pencil, was used to extract the initial photoacoustic response corresponding to a mustard seed of 0.65 mm radius. The recovered radius values of the mustard seed in the experimental and simulation studies were 0.6 and 0.7 mm, respectively.
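The paper estimates the impulse response with DWT-based homomorphic filtering; the sketch below skips that step and assumes the impulse response is already known, showing only a generic frequency-domain Wiener deconvolution of the convolution model `recorded = initial * impulse`. The Wiener regularization is a common textbook choice and may differ from the authors' deconvolution scheme.

```python
import numpy as np

def wiener_deconvolve(recorded, impulse, noise_to_signal=1e-2):
    """Recover the initial response from recorded = initial * impulse.

    Frequency-domain Wiener filter: X = Y H* / (|H|^2 + NSR); the
    noise_to_signal term regularizes bins where the impulse-response
    spectrum H is weak.
    """
    n = len(recorded)
    H = np.fft.rfft(impulse, n)
    Y = np.fft.rfft(recorded, n)
    X = Y * H.conj() / (np.abs(H) ** 2 + noise_to_signal)
    return np.fft.irfft(X, n)
```

With the initial response recovered, the absorber size follows from the temporal width of the response and the speed of sound, as the abstract describes.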
{"title":"Computation of Photoacoustic Absorber Size from Deconvolved Photoacoustic Signal Using Estimated System Impulse Response.","authors":"Nikita Rathi, Saugata Sinha, Bhargava Chinni, Vikram Dogra, Navalgund Rao","doi":"10.1177/0161734620977838","DOIUrl":"https://doi.org/10.1177/0161734620977838","url":null,"abstract":"<p><p>Photoacoustic signal recorded by photoacoustic imaging system can be modeled as convolution of initial photoacoustic response by the photoacoustic absorber with the system impulse response. Our goal was to compute the size of photoacoustic absorber using the initial photoacoustic response, deconvolved from the recorded photoacoustic data. For deconvolution, we proposed to use the impulse response of the photoacoustic system, estimated using discrete wavelet transform based homomorphic filtering. The proposed method was implemented on experimentally acquired photoacoustic data generated by different phantoms and also verified by a simulation study involving photoacoustic targets, identical to the phantoms in experimental study. The photoacoustic system impulse response, which was estimated using the acquired photoacoustic signal corresponding to a lead pencil, was used to extract initial photoacoustic response corresponding to a mustard seed of 0.65 mm radius. 
The recovered radius values of the mustard seed, corresponding to the experimental and simulation studies were 0.6 mm and 0.7 mm.</p>","PeriodicalId":49401,"journal":{"name":"Ultrasonic Imaging","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/0161734620977838","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38744827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-01-01 | DOI: 10.1177/0161734620974273
Alex Noel Joseph Raj, Ruban Nersisson, Vijayalakshmi G V Mahesh, Zhemin Zhuang
The nipple is a vital landmark in breast lesion diagnosis. Although there are advanced computer-aided detection (CADe) systems for nipple detection in mediolateral oblique (MLO) views of mammogram images, few academic works address the coronal views of breast ultrasound (BUS) images. This paper presents a novel CADe system to locate the Nipple Shadow Area (NSA) in ultrasound images. Hu Moments and the Gray-level Co-occurrence Matrix (GLCM) are calculated over an iterative sliding window to extract shape and texture features. These features are then concatenated and fed into an Artificial Neural Network (ANN) to obtain candidate NSAs. Next, contour features, such as shape complexity via fractal dimension, edge distance from the periphery, and contour area, are computed and passed to a Support Vector Machine (SVM) to identify the correct NSA in each case. The coronal-plane BUS dataset was built in-house and consists of 64 images from 13 patients. The test results show that the proposed CADe system achieves 91.99% accuracy, 97.55% specificity, 82.46% sensitivity, and an 88% F-score on our dataset.
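To make the texture-feature step concrete, here is a minimal NumPy sketch of a GLCM for a single horizontal pixel offset with two standard features (contrast and homogeneity). The Hu moments, sliding window, ANN, and SVM stages are omitted; the assumed intensity range [0, 1] and the 8-level quantization are illustrative choices, not taken from the paper.

```python
import numpy as np

def glcm_features(img, levels=8):
    """GLCM for offset (0, 1) on an image with values in [0, 1],
    normalized to sum to 1, plus contrast and homogeneity features."""
    q = np.minimum((img * levels).astype(int), levels - 1)  # quantize gray levels
    pairs = np.stack([q[:, :-1].ravel(), q[:, 1:].ravel()])  # horizontal neighbors
    P = np.zeros((levels, levels))
    np.add.at(P, (pairs[0], pairs[1]), 1.0)                  # co-occurrence counts
    P /= P.sum()
    i, j = np.indices(P.shape)
    contrast = ((i - j) ** 2 * P).sum()
    homogeneity = (P / (1.0 + np.abs(i - j))).sum()
    return P, contrast, homogeneity
```

Smooth regions put mass near the GLCM diagonal (low contrast, high homogeneity), while high-frequency texture such as a shadow boundary spreads mass off-diagonal.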
{"title":"Nipple Localization in Automated Whole Breast Ultrasound Coronal Scans Using Ensemble Learning.","authors":"Alex Noel Joseph Raj, Ruban Nersisson, Vijayalakshmi G V Mahesh, Zhemin Zhuang","doi":"10.1177/0161734620974273","DOIUrl":"https://doi.org/10.1177/0161734620974273","url":null,"abstract":"<p><p>Nipple is a vital landmark in the breast lesion diagnosis. Although there are advanced computer-aided detection (CADe) systems for nipple detection in breast mediolateral oblique (MLO) views of mammogram images, few academic works address the coronal views of breast ultrasound (BUS) images. This paper addresses a novel CADe system to locate the Nipple Shadow Area (NSA) in ultrasound images. Here the Hu Moments and Gray-level Co-occurrence Matrix (GLCM) were calculated through an iterative sliding window for the extraction of shape and texture features. These features are then concatenated and fed into an Artificial Neural Network (ANN) to obtain probable NSA's. Later, contour features, such as shape complexity through fractal dimension, edge distance from the periphery and contour area, were computed and passed into a Support Vector Machine (SVM) to identify the accurate NSA in each case. The coronal plane BUS dataset is built upon our own, which consists of 64 images from 13 patients. 
The test results show that the proposed CADe system achieves 91.99% accuracy, 97.55% specificity, 82.46% sensitivity and 88% F-score on our dataset.</p>","PeriodicalId":49401,"journal":{"name":"Ultrasonic Imaging","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/0161734620974273","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38744828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-01-01 | DOI: 10.1177/0161734620976408
Jiangang Chen, Jiawei Li, Chao He, Wenfang Li, Qingli Li
It is vitally important to identify the pleural line when performing lung ultrasound, as the pleural line not only indicates the interface between the chest wall and the lung but also offers additional diagnostic information. In current clinical practice, the pleural line is visually detected and evaluated by clinicians, which requires experience and skill and is challenging for novices. In this study, we developed a computer-aided technique for automated pleural line detection in ultrasound. The method first utilizes the Radon transform to detect line objects in the ultrasound images. The relationship between body mass index and chest wall thickness is then applied to estimate the range of the pleural thickness, based on which the pleural line is detected with consideration of its ultrasonic properties. The proposed method was validated on 83 ultrasound data sets collected from 21 pneumothorax patients. The pleural lines were successfully identified in 76 data sets by the automated method (a successful detection rate of 91.6%). In those successful cases, the depths of the pleural lines measured by the automated method agreed with those measured manually, as confirmed by a Bland-Altman test, with measurement errors below 5% of the pleural line depth. In conclusion, the proposed method detects the pleural line automatically in the defined data set, and it may potentially serve as an alternative to visual inspection after further tests on more diverse data sets in future studies.
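The line-detection idea can be sketched with a naive Radon transform: each pixel is binned by its signed distance to a family of parallel lines at a given angle, and intensities are summed per bin. A bright straight line then produces a sharp peak at the angle matching its orientation. This is a minimal illustration only; the paper's full pipeline (BMI-based thickness prior, pleural-line ultrasonic properties) is not reproduced, and production code would use an interpolating Radon implementation.

```python
import numpy as np

def radon_sinogram(img, angles_deg):
    """Naive Radon transform by nearest-bin accumulation.

    Returns one projection per angle; bin index = rounded signed distance
    of the pixel from the center line, shifted to be non-negative.
    """
    h, w = img.shape
    ys, xs = np.indices((h, w))
    yc = ys - (h - 1) / 2.0
    xc = xs - (w - 1) / 2.0
    offset = max(h, w)
    sino = np.zeros((len(angles_deg), 2 * offset + 1))
    for k, a in enumerate(np.deg2rad(angles_deg)):
        r = np.cos(a) * yc + np.sin(a) * xc   # distance across the line family
        bins = np.round(r).astype(int) + offset
        np.add.at(sino[k], bins.ravel(), img.ravel())
    return sino
```

For a horizontal pleural-line-like stripe, the 0-degree projection concentrates the whole line into a single bin, while the 90-degree projection spreads it across many bins.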
{"title":"Automated Pleural Line Detection Based on Radon Transform Using Ultrasound.","authors":"Jiangang Chen, Jiawei Li, Chao He, Wenfang Li, Qingli Li","doi":"10.1177/0161734620976408","DOIUrl":"https://doi.org/10.1177/0161734620976408","url":null,"abstract":"<p><p>It is of vital importance to identify the pleural line when performing lung ultrasound, as the pleural line not only indicates the interface between the chest wall and lung, but offers additional diagnostic information. In the current clinical practice, the pleural line is visually detected and evaluated by clinicians, which requires experiences and skills with challenges for the novice. In this study, we developed a computer-aided technique for automated pleural line detection using ultrasound. The method first utilized the Radon transform to detect line objects in the ultrasound images. The relation of the body mass index and chest wall thickness was then applied to estimate the range of the pleural thickness, based on which the pleural line was detected together with the consideration of the ultrasonic properties of the pleural line. The proposed method was validated by testing 83 ultrasound data sets collected from 21 pneumothorax patients. The pleural lines were successfully identified in 76 data sets by the automated method (successful detection rate 91.6%). In those successful cases, the depths of the pleural lines measured by the automated method agreed with those manually measured as confirmed with the Bland-Altman test. The measurement errors were below 5% in terms of the pleural line depth. As a conclusion, the proposed method could detect the pleural line in an automated manner in the defined data set. 
In addition, the method may potentially act as an alternative to visual inspection after further tests on more diverse data sets are performed in future studies.</p>","PeriodicalId":49401,"journal":{"name":"Ultrasonic Imaging","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/0161734620976408","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38744826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-01 | DOI: 10.1177/0161734620959780
Xingyu Liang, Ziyao Li, Lei Zhang, Dongmo Wang, Jiawei Tian
To explore the value of contrast-enhanced ultrasound (CEUS) in the differential diagnosis of molecular subtypes of breast cancer. Sixty-two cases of breast cancer were divided into luminal epithelium A or B subtype (luminal A/B), Her-2 over-expression subtype and triple negative subtype (TN). CEUS and routine ultrasonography were performed for all patients before surgery. (1) The luminal epithelium subtype contrast enhancement pattern was more likely to present with radial edge (76.92%, p < 0.05) and low perfusion (69.23%, p < 0.05). The maximum intensity (IMAX) was lower in the luminal epithelium subtype (p < 0.05). (2) The Her-2 over-expression subtype contrast enhancement pattern was more likely to present with centripetal enhancement (93.75%, p < 0.05) and perfusion defect (75.0%, p < 0.05), and the time to peak (TTP) was shorter (80.0%, p < 0.05). (3) The contrast enhancement pattern of the triple negative subtype was shown to have a clear boundary. Compared to the other two subtypes, the triple negative subtype did not have significantly different perfusion parameters (p > 0.05). (4) Our study showed that the areas under the ROC curve for radial edge, low perfusion and IMAX for the luminal epithelium subtype breast lesions were 76.5%, 75.6%, and 82.1%, respectively. Additionally, the areas under the ROC curve for centripetal enhancement, perfusion defect and TTP for the Her-2 over-expression subtype breast lesions were 68.6%, 92.4%, and 97.8%, respectively. The sensitivity, specificity, and diagnostic accuracy of clear boundaries in detecting triple negative subtype breast lesions were 90.5%, 80.0%, and 91.9%, respectively.
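The abstract reports several areas under the ROC curve for CEUS features. As a minimal sketch (not the authors' statistical software), an AUC can be computed directly from per-lesion feature values via the Mann-Whitney formulation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one, with ties counting one half.

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney statistic over all positive/negative pairs."""
    pos = np.asarray(scores_pos, float)[:, None]   # shape (n_pos, 1)
    neg = np.asarray(scores_neg, float)[None, :]   # shape (1, n_neg)
    return float((pos > neg).mean() + 0.5 * (pos == neg).mean())
```

This pairwise form is O(n_pos * n_neg) but needs no explicit thresholding or curve construction, which makes it convenient for small diagnostic cohorts like the 62 cases here.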
{"title":"Application of Contrast-Enhanced Ultrasound in the Differential Diagnosis of Different Molecular Subtypes of Breast Cancer.","authors":"Xingyu Liang, Ziyao Li, Lei Zhang, Dongmo Wang, Jiawei Tian","doi":"10.1177/0161734620959780","DOIUrl":"https://doi.org/10.1177/0161734620959780","url":null,"abstract":"<p><p>To explore the value of contrast-enhanced ultrasound (CEUS) in the differential diagnosis of molecular subtypes of breast cancer. Sixty-two cases of breast cancer were divided into luminal epithelium A or B subtype (luminal A/B), Her-2 over-expression subtype and triple negative subtype (TN). CEUS and routine ultrasonography were performed for all patients before surgery. (1) The luminal epithelium subtype contrast enhancement pattern was more likely to present with radial edge (76.92%, <i>p</i> < 0.05) and low perfusion (69.23%, <i>p</i> < 0.05). The maximum intensity (IMAX) was lower in the luminal epithelium subtype (<i>p</i> < 0.05). (2) The Her-2 over-expression subtype contrast enhancement pattern was more likely to present with centripetal enhancement (93.75%, <i>p</i> < 0.05) and perfusion defect (75.0%, <i>p</i> < 0.05), and the time to peak (TTP) was shorter (80.0%, <i>p</i> < 0.05). (3) The contrast enhancement pattern of the triple negative subtype was shown to have a clear boundary. Compared to the other two subtypes, the triple negative subtype did not have significantly different perfusion parameters (<i>p</i> > 0.05). (4) Our study showed that the areas under the ROC curve for radial edge, low perfusion and IMAX for the luminal epithelium subtype breast lesions were 76.5%, 75.6%, and 82.1%, respectively. Additionally, the areas under the ROC curve for centripetal enhancement, perfusion defect and TTP for the Her-2 over-expression subtype breast lesions were 68.6%, 92.4%, and 97.8%, respectively. 
The sensitivity, specificity, and diagnostic accuracy of clear boundaries in detecting triple negative subtype breast lesions were 90.5%, 80.0%, and 91.9%, respectively.</p>","PeriodicalId":49401,"journal":{"name":"Ultrasonic Imaging","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/0161734620959780","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38461718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-01 | Epub Date: 2020-09-18 | DOI: 10.1177/0161734620956897
Kun Wang, Yuanyuan Pu, Yufeng Zhang, Pei Wang
The intima-media thickness (IMT) of the common carotid artery (CCA) can be used to predict the risk of atherosclerosis. Many image segmentation techniques have been used for IMT measurement. However, severe noise in the ultrasound image can lead to erroneous segmentation results. To improve robustness to noise, a fully automatic method based on an improved Otsu's method and an adaptive wind-driven optimization technique (denoted "improved Otsu-AWDO") is proposed for estimating the IMT. First, an advanced despeckling filter, Nagare's filter, is used to address the speckle noise in the carotid ultrasound images. Next, an improved fuzzy contrast (IFC) method is used to enhance the region of the intima-media complex (IMC) in the filtered images. Then, a new method is used for automatic extraction of the region of interest (ROI). Finally, the lumen-intima interface and media-adventitia interface are segmented from the IMC using improved Otsu-AWDO. In total, 156 B-mode longitudinal carotid ultrasound images from six different datasets are used to evaluate the performance of the automatic measurements. The results indicate that the absolute error of the proposed method is only 10.1 ± 9.6 μm (mean ± std). Moreover, the proposed method has a correlation coefficient as high as 0.9922 and a bias as low as 0.0007. Comparison with previous methods shows that the proposed method is robust and provides accurate IMT estimates.
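For reference, here is the classic Otsu threshold that "improved Otsu-AWDO" builds on, in plain NumPy: pick the gray level that maximizes the between-class variance of the histogram. The paper's improvements and the wind-driven optimization stage are not reproduced here.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's method: threshold maximizing the between-class variance
    sigma_b^2(t) = (mu_T * w0(t) - mu(t))^2 / (w0(t) * (1 - w0(t)))."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                 # class-0 probability up to each bin
    mu = np.cumsum(p * centers)       # cumulative mean up to each bin
    mu_t = mu[-1]                     # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0   # empty-class bins carry no information
    return centers[np.argmax(sigma_b)]
```

On a bimodal intensity distribution (e.g., lumen versus vessel wall), the returned threshold falls between the two modes.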
{"title":"Fully Automatic Measurement of Intima-Media Thickness in Ultrasound Images of the Common Carotid Artery Based on Improved Otsu's Method and Adaptive Wind Driven Optimization.","authors":"Kun Wang, Yuanyuan Pu, Yufeng Zhang, Pei Wang","doi":"10.1177/0161734620956897","DOIUrl":"https://doi.org/10.1177/0161734620956897","url":null,"abstract":"<p><p>The intima media thickness (IMT) of the common carotid artery (CCA) can be used to predict the risk of atherosclerosis. Many image segmentation techniques have been used for IMT measurement. However, severe noise in the ultrasound image can lead to erroneous segmentation results. To improve the robustness to noise, a fully automatic method, based on an improved Otsu's method and an adaptive wind-driven optimization technique, is proposed for estimating the IMT (denoted as \"improved Otsu-AWDO\"). First, an advanced despeckling filter, i.e., \" Nagare's filter\" is used to address the speckle noise in the carotid ultrasound images. Next, an improved fuzzy contrast method (IFC) is used to enhance the region of the intima media complex (IMC) in the blurred filtered images. Then, a new method is used for automatic extraction of the region of interest (ROI). Finally, the lumen intima interface and media adventitia interface are segmented from the IMC using improved Otsu-AWDO. Then, 156 B-mode longitudinal carotid ultrasound images of six different datasets are used to evaluate the performance of the automatic measurements. The results indicate that the absolute error of proposed method is only 10.1 ± 9.6 (mean ± std in μm). Moreover, the proposed method has a correlation coefficient as high as 0.9922, and a bias as low as 0.0007. 
From comparison with previous methods, we can conclude that the proposed method has strong robustness and can provide accurate IMT estimations.</p>","PeriodicalId":49401,"journal":{"name":"Ultrasonic Imaging","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/0161734620956897","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38396280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-01 | Epub Date: 2020-08-28 | DOI: 10.1177/0161734620952683
Pan Li, Xuebing Yang, Guanjun Yin, Jianzhong Guo
Muscle fatigue often occurs over a long period of exercise and can increase the risk of muscle injury. Evaluating the state of muscle fatigue can prevent unnecessary overtraining and injury. Ultrasound imaging can non-invasively visualize muscle tissue in real time, and image entropy is commonly used to characterize the texture of an image. In this study, we evaluated changes in ultrasound image entropy (USIE) during the fatigue process. Twelve volunteers performed static sustained contractions of the biceps brachii at four different intensities (20%, 30%, 40%, and 50% of maximal voluntary contraction torque). Ultrasound images and surface electromyography (sEMG) signals were acquired during exercise to fatigue. We found that (1) the root-mean-square of the sEMG signal increased while the USIE decreased significantly with time during the sustained contractions; (2) the maximum endurance time (MET) and the percentage decline of USIE differed significantly (p < .05) among the four contraction intensities; and (3) the decline slope of USIE for a given volunteer was essentially the same across contraction intensities. USIE could thus serve as a new method for evaluating the skeletal muscle fatigue state.
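The abstract does not give the exact entropy definition, so the sketch below uses one plausible choice: the Shannon entropy of the gray-level histogram, in bits. A uniform image has zero entropy; richer texture yields higher entropy. The bin count and assumed [0, 1] intensity range are illustrative.

```python
import numpy as np

def image_entropy(img, bins=64):
    """Shannon entropy (bits) of the gray-level histogram of an image
    with intensities in [0, 1]."""
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()        # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())
```

Tracking this scalar frame by frame over a sustained contraction gives the decline curve whose slope the study analyzes.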
{"title":"Skeletal Muscle Fatigue State Evaluation with Ultrasound Image Entropy.","authors":"Pan Li, Xuebing Yang, Guanjun Yin, Jianzhong Guo","doi":"10.1177/0161734620952683","DOIUrl":"https://doi.org/10.1177/0161734620952683","url":null,"abstract":"<p><p>Muscle fatigue often occurs over a long period of exercise, and it can increase the risk of muscle injury. Evaluating the state of muscle fatigue can avoid unnecessary overtraining and injury of the muscle. Ultrasound imaging can non-invasively visualize muscle tissue in real-time. Image entropy is commonly used to characterize the texture of an image. In this study, we evaluated changes in the ultrasound image entropy (USIE) during the fatigue process. Twelve volunteers performed static sustained contractions of biceps brachii at four different intensities (20%, 30%, 40%, and 50% of maximal voluntary contraction torque). The ultrasound images and surface electromyography (sEMG) signals were acquired during exercise to fatigue. We found that (1) the root-mean-square of the sEMG signal increased, the USIE decreased significantly with time during the sustained contractions; (2) the maximum endurance time (MET) and the decline percentage of USIE were significantly different (<i>p</i> < .05) among the four contraction intensities; (3) the decline slope of USIE of the same volunteer was basically the same at different contraction intensities. 
The USIE could be a new method for the evaluation of skeletal muscle fatigue state.</p>","PeriodicalId":49401,"journal":{"name":"Ultrasonic Imaging","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/0161734620952683","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38320658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-01 | DOI: 10.1177/0161734620961005
Puja Bharti, Deepti Mittal
The low contrast and noise of ultrasound images adversely impact the detection of abnormalities. In view of this, an enhancement method is proposed in this work to reduce noise and improve the contrast of ultrasound images. The proposed method is based on scaling with a neutrosophic similarity score (NSS), where an image is represented in the neutrosophic domain through three membership subsets T, I, and F denoting the degrees of truth, indeterminacy, and falseness, respectively. The NSS measures the degree to which a pixel belongs to a texture using multiple criteria based on intensity, local mean intensity, and edge detection. The NSS is then used to derive an enhancement coefficient, which is applied to scale the input image; this scaling yields contrast improvement and a denoising effect. The performance of the proposed enhancement method is evaluated on clinical ultrasound images using both subjective and objective image quality measures. In the subjective evaluation, the proposed method obtained the best overall score of 4.3, which was 44% higher than the score of the original images; these results were also supported by the objective measures. The results demonstrate that the proposed method outperforms the other methods in terms of mean brightness preservation, edge preservation, structural similarity, and human perception-based image quality assessment. Thus, the proposed method can be used in computer-aided diagnosis systems and to visually assist radiologists in interactive decision-making.
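To illustrate the neutrosophic representation, the sketch below maps a grayscale image to T/I/F membership maps using simple, illustrative membership choices (normalized local mean for T, normalized deviation from the local mean for I, and F = 1 - T); these are not the exact membership functions or NSS criteria of the paper. The k x k local mean is a plain box filter built from shifted views, so no SciPy dependency is needed.

```python
import numpy as np

def neutrosophic_maps(img, k=3):
    """T/I/F membership maps for a grayscale image with values in [0, 1].

    T: normalized k x k local mean (degree of truth),
    I: normalized absolute deviation from the local mean (indeterminacy),
    F: 1 - T (falseness). Illustrative choices, not the paper's exact ones.
    """
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    shifts = [padded[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(k) for j in range(k)]
    local_mean = np.mean(shifts, axis=0)
    T = (local_mean - local_mean.min()) / (np.ptp(local_mean) + 1e-12)
    delta = np.abs(img - local_mean)
    I = (delta - delta.min()) / (np.ptp(delta) + 1e-12)
    F = 1.0 - T
    return T, I, F
```

Indeterminacy I peaks where a pixel disagrees with its neighborhood, i.e., at edges and speckle, which is exactly where an enhancement coefficient should behave differently than in homogeneous tissue.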
{"title":"An Ultrasound Image Enhancement Method Using Neutrosophic Similarity Score.","authors":"Puja Bharti, Deepti Mittal","doi":"10.1177/0161734620961005","DOIUrl":"https://doi.org/10.1177/0161734620961005","url":null,"abstract":"<p><p>Ultrasound images, having low contrast and noise, adversely impact in the detection of abnormalities. In view of this, an enhancement method is proposed in this work to reduce noise and improve contrast of ultrasound images. The proposed method is based on scaling with neutrosophic similarity score (NSS), where an image is represented in the neutrosophic domain through three membership subsets <i>T, I</i>, and <i>F</i> denoting the degree of truth, indeterminacy, and falseness, respectively. The NSS measures the belonging degree of pixel to the texture using multi-criteria that is based on intensity, local mean intensity and edge detection. Then, NSS is utilized to extract the enhanced coefficient and this enhanced coefficient is applied to scale the input image. This scaling reflects contrast improvement and denoising effect on ultrasound images. The performance of proposed enhancement method is evaluated on clinical ultrasound images, using both subjective and objective image quality measures. In subjective evaluation, with proposed method, overall best score of 4.3 was obtained and that was 44% higher than the score of original images. These results were also supported by objective measures. The results demonstrated that the proposed method outperformed the other methods in terms of mean brightness preservation, edge preservation, structural similarity, and human perception-based image quality assessment. 
Thus, the proposed method can be used in computer-aided diagnosis systems and to visually assist radiologists in their interactive-decision-making task.</p>","PeriodicalId":49401,"journal":{"name":"Ultrasonic Imaging","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/0161734620961005","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38457231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-07-01 | DOI: 10.1177/0161734620951216
Nirvedh H Meshram, Carol C Mitchell, Stephanie Wilbrand, Robert J Dempsey, Tomy Varghese
Carotid plaque segmentation in longitudinal B-mode ultrasound images using deep learning is presented in this work. We report on 101 patients with severely stenotic carotid plaque. A standard U-Net is compared with a dilated U-Net architecture in which dilated convolution layers are used in the bottleneck. Both a fully automatic approach and a semi-automatic approach with a bounding box were implemented, and the performance degradation in plaque segmentation due to errors in the bounding box was quantified. We found that the bounding box significantly improved the performance of the networks, with U-Net Dice coefficients of 0.48 for automatic and 0.83 for semi-automatic segmentation of plaque. Similar results were obtained for the dilated U-Net, with Dice coefficients of 0.55 for automatic and 0.84 for semi-automatic segmentation, when compared with manual segmentations of the same plaque by an experienced sonographer. A 5% error in the bounding box in both dimensions reduced the Dice coefficient to 0.79 and 0.80 for the U-Net and dilated U-Net, respectively.
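The Dice coefficient used to score the segmentations above is straightforward to compute for binary masks; a minimal sketch:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity 2|A ∩ B| / (|A| + |B|) for binary masks.

    Returns 1.0 for identical masks (including two empty masks) and
    0.0 for disjoint ones.
    """
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else float(2.0 * np.logical_and(a, b).sum() / denom)
```

Comparing a network's predicted plaque mask against the sonographer's manual mask with this function yields the 0.48-0.84 figures the study reports.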
{"title":"Deep Learning for Carotid Plaque Segmentation using a Dilated U-Net Architecture.","authors":"Nirvedh H Meshram, Carol C Mitchell, Stephanie Wilbrand, Robert J Dempsey, Tomy Varghese","doi":"10.1177/0161734620951216","DOIUrl":"https://doi.org/10.1177/0161734620951216","url":null,"abstract":"<p><p>Carotid plaque segmentation in ultrasound longitudinal B-mode images using deep learning is presented in this work. We report on 101 severely stenotic carotid plaque patients. A standard U-Net is compared with a dilated U-Net architecture in which the dilated convolution layers were used in the bottleneck. Both a fully automatic and a semi-automatic approach with a bounding box was implemented. The performance degradation in plaque segmentation due to errors in the bounding box is quantified. We found that the bounding box significantly improved the performance of the networks with U-Net Dice coefficients of 0.48 for automatic and 0.83 for semi-automatic segmentation of plaque. Similar results were also obtained for the dilated U-Net with Dice coefficients of 0.55 for automatic and 0.84 for semi-automatic when compared to manual segmentations of the same plaque by an experienced sonographer. A 5% error in the bounding box in both dimensions reduced the Dice coefficient to 0.79 and 0.80 for U-Net and dilated U-Net respectively.</p>","PeriodicalId":49401,"journal":{"name":"Ultrasonic Imaging","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/0161734620951216","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38441228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We aimed to use deep learning with convolutional neural networks (CNNs) to discriminate images of benign and malignant breast masses on ultrasound shear wave elastography (SWE). We retrospectively gathered 158 images of benign masses and 146 images of malignant masses as training data for SWE. A deep learning model was constructed using several CNN architectures (Xception, InceptionV3, InceptionResNetV2, DenseNet121, DenseNet169, and NASNetMobile) with 50, 100, and 200 epochs. We analyzed SWE images of 38 benign masses and 35 malignant masses as test data. Two radiologists interpreted these test data through a consensus reading using a 5-point visual color assessment (SWEc) and the mean elasticity value in kPa (SWEe). Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated. The best CNN model (DenseNet169 with 100 epochs), SWEc, and SWEe had sensitivities of 0.857, 0.829, and 0.914 and specificities of 0.789, 0.737, and 0.763, respectively. The CNNs exhibited a mean AUC of 0.870 (range, 0.844-0.898), while SWEc and SWEe had AUCs of 0.821 and 0.855, respectively. The CNNs had diagnostic performance equal to or better than the radiologist readings. DenseNet169 with 100 epochs, Xception with 50 epochs, and Xception with 100 epochs performed better than SWEc (P = 0.018-0.037). Deep learning with CNNs exhibited an AUC equal to or higher than that of the radiologists when discriminating benign from malignant breast masses on ultrasound SWE.
Pub Date : 2020-07-01 DOI: 10.1177/0161734620932609
"Classification of Breast Masses on Ultrasound Shear Wave Elastography using Convolutional Neural Networks." Tomoyuki Fujioka, Leona Katsuta, Kazunori Kubota, Mio Mori, Yuka Kikuchi, Arisa Kato, Goshi Oda, Tsuyoshi Nakagawa, Yoshio Kitazume, Ukihide Tateishi. Ultrasonic Imaging.
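The metrics reported above (sensitivity, specificity, AUC) can all be computed directly from classifier scores and ground-truth labels; the AUC in particular equals the probability that a randomly chosen positive case outscores a randomly chosen negative one (the Mann-Whitney formulation). A minimal sketch with made-up scores, not data from the study:

```python
def sensitivity_specificity(scores, labels, threshold):
    """Fraction of positives (label 1) and negatives (label 0) correctly classified at a cutoff."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    p = sum(labels)
    n = len(labels) - p
    return tp / p, tn / n

def auc(scores, labels):
    """Probability a random positive outscores a random negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical malignancy scores for 4 malignant (1) and 4 benign (0) masses.
scores = [0.92, 0.80, 0.65, 0.40, 0.55, 0.30, 0.20, 0.10]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
sens, spec = sensitivity_specificity(scores, labels, threshold=0.5)
print(sens, spec, auc(scores, labels))  # 0.75 0.75 0.9375
```

Note that sensitivity and specificity depend on the chosen threshold while the AUC does not, which is why the abstract's model comparison rests on AUC rather than on any single operating point.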