Pub Date: 2025-11-01 | Epub Date: 2025-08-28 | DOI: 10.1177/01617346251362168
Francesco Bianconi, Muhammad Usama Khan, Hongbo Du, Sabah Jassim
Breast ultrasound images play a pivotal role in assessing the nature of suspicious breast lesions, particularly in patients with dense tissue. Computerized analysis of breast ultrasound images has the potential to assist physicians in clinical decision-making and to improve on subjective interpretation. We assess the performance of conventional features, deep learning features, and ensemble schemes for discriminating benign versus malignant breast lesions on ultrasound images. A total of 19 individual feature sets (1 morphological, 2 first-order, 10 texture-based, and 6 CNN-based) were included in the analysis. Furthermore, four combined feature sets (Best by class; Top 3, 5, and 7) and four fusion schemes (feature concatenation, majority voting, and the sum and product rules) were considered to generate ensemble models. The experiments were carried out on three independent open-access datasets containing, respectively, 252 (154 benign, 98 malignant), 232 (109 benign, 123 malignant), and 281 (187 benign, 94 malignant) lesions. CNN-based features outperformed the other individual descriptors, achieving accuracies between 77.4% and 83.6%, followed by morphological features (71.6%-80.8%) and histograms of oriented gradients (71.4%-77.6%). Ensemble models further improved the accuracy to between 80.2% and 87.5%. Fusion schemes based on the product and sum rules were generally superior to feature concatenation and majority voting. Combining individual feature sets through ensemble schemes demonstrates clear advantages for discriminating benign versus malignant breast lesions on ultrasound images.
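The sum rule, product rule, and majority-voting fusion schemes named in this abstract are standard classifier combiners. A minimal illustrative sketch of these three rules over per-classifier class-probability arrays (an illustration of the general technique with hypothetical inputs, not the authors' implementation):

```python
import numpy as np

def fuse_predictions(prob_list, scheme="sum"):
    """Fuse per-classifier probability arrays (each n_samples x n_classes)."""
    probs = np.stack(prob_list)          # (n_classifiers, n_samples, n_classes)
    if scheme == "sum":                  # sum rule: add (average) the posteriors
        fused = probs.sum(axis=0)
    elif scheme == "product":            # product rule: multiply the posteriors
        fused = probs.prod(axis=0)
    elif scheme == "vote":               # majority voting on hard labels
        votes = probs.argmax(axis=2)     # (n_classifiers, n_samples)
        n_classes = probs.shape[2]
        fused = np.apply_along_axis(
            lambda v: np.bincount(v, minlength=n_classes), 0, votes).T
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return fused.argmax(axis=1)          # predicted class per sample
```

Feature concatenation, by contrast, merges the descriptors before training a single classifier, so it has no counterpart in this decision-level sketch.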
Article: "Experimental Assessment of Conventional Features, CNN-Based Features and Ensemble Schemes for Discriminating Benign Versus Malignant Lesions on Breast Ultrasound Images." Francesco Bianconi, Muhammad Usama Khan, Hongbo Du, Sabah Jassim. Ultrasonic Imaging, pp. 256-269. DOI: 10.1177/01617346251362168.
Pub Date: 2025-09-01 | Epub Date: 2025-07-24 | DOI: 10.1177/01617346251342609
Anne-Lise Duroy, Olivier Basset, Elisabeth Brusseau
Detection and characterization of breast pathologies is an essential clinical issue. Quasi-static ultrasound elastography has been proposed to provide information about the mechanical properties of tissues during patient examination. However, reconstructing tissue properties is a challenging task, as it requires solving an ill-posed inverse problem, generally with no available boundary information and solely 2D estimated displacements, whereas the problem is inherently three-dimensional. In this paper, a virtual fields-based method is investigated to reconstruct Young's modulus maps from knowledge of the internal displacements and the applied force. The media examined are assumed to be linear elastic and isotropic, and to overcome the lack of 3D data, plane stress conditions are assumed. The developed method is assessed with plane-stress and 3D simulations, as well as phantom and patient data. For all the media examined, the reconstructed Young's modulus maps clearly reveal regions with different stiffnesses. The stiffness contrast between regions is accurately estimated for the different plane-stress simulations, but underestimated for the 3D simulations. These results are to be expected, as plane stress conditions are no longer satisfied in the 3D simulations. On the other hand, for all these cases, the size and position of the different regions are correctly estimated whenever a region is larger than a pixel. Finally, similar observations hold for the experimental results. In particular, for the in vivo results, the inclusion-to-background Young's modulus ratio is estimated on average at around 6.61 for the carcinoma and 4.57 for the fibroadenoma, which is consistent with the literature.
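As textbook background for the plane-stress assumption invoked above (standard linear-elastic isotropic relations, not necessarily the paper's exact notation), the in-plane stresses relate to the strains through Young's modulus $E$ and Poisson's ratio $\nu$, and virtual fields methods exploit the principle of virtual work:

```latex
\sigma_{xx} = \frac{E}{1-\nu^{2}}\left(\varepsilon_{xx} + \nu\,\varepsilon_{yy}\right),\qquad
\sigma_{yy} = \frac{E}{1-\nu^{2}}\left(\varepsilon_{yy} + \nu\,\varepsilon_{xx}\right),\qquad
\sigma_{xy} = \frac{E}{1+\nu}\,\varepsilon_{xy},
```

```latex
\int_{S} \boldsymbol{\sigma} : \boldsymbol{\varepsilon}^{*}\,\mathrm{d}S
= \int_{\partial S} \mathbf{T} \cdot \mathbf{u}^{*}\,\mathrm{d}l
\quad \text{for every kinematically admissible virtual field } \mathbf{u}^{*},
```

where $\mathbf{T}$ is the applied traction; choosing particular virtual fields $\mathbf{u}^{*}$ turns the measured displacements and applied force into equations for the unknown modulus map.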
Article: "Elastic Modulus Imaging for Breast Application Using a Virtual Fields Based-Method in Quasi-Static Ultrasound Elastography." Anne-Lise Duroy, Olivier Basset, Elisabeth Brusseau. Ultrasonic Imaging, pp. 189-201. DOI: 10.1177/01617346251342609.
Pub Date: 2025-09-01 | Epub Date: 2025-06-20 | DOI: 10.1177/01617346251346060
Dongchen Ling, Xiong Jiao
Breast cancer is the leading cancer threatening women's health. In recent years, deep neural networks have outperformed traditional methods in terms of both accuracy and efficiency for breast cancer classification. However, most ultrasound-based breast cancer classification methods rely on single-perspective information, which may lead to higher misdiagnosis rates. In this study, we propose a multi-view knowledge distillation vision transformer architecture (MVKD-Trans) for the classification of benign and malignant breast tumors. We utilize multi-view ultrasound images of the same tumor to capture diverse features. Additionally, we employ a shuffle module for feature fusion, extracting channel and spatial dual-attention information to improve the model's representational capability. Given the limited computational capacity of ultrasound devices, we also utilize knowledge distillation (KD) techniques to compress the multi-view network into a single-view network. The results show that the accuracy, area under the ROC curve (AUC), sensitivity, specificity, precision, and F1 score of the model are 88.15%, 91.23%, 81.41%, 90.73%, 78.29%, and 79.69%, respectively. The superior performance of our approach, compared to several existing models, highlights its potential to significantly enhance the understanding and classification of breast cancer.
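The knowledge distillation step described above, compressing a multi-view teacher into a single-view student, is typically trained with a Hinton-style loss: a temperature-softened KL term against the teacher plus a hard cross-entropy term against the labels. A minimal numpy sketch of such a loss (an illustration of the general KD technique, not the MVKD-Trans code; `T` and `alpha` are hypothetical hyperparameters):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=3.0, alpha=0.5):
    """alpha * T^2 * KL(teacher || student at temperature T)
       + (1 - alpha) * hard cross-entropy with the true labels."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return alpha * (T ** 2) * kl.mean() + (1 - alpha) * ce.mean()
```

The `T ** 2` factor keeps the soft-target gradient magnitude comparable across temperatures, a common convention in KD formulations.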
Article: "MVKD-Trans: A Multi-View Knowledge Distillation Vision Transformer Architecture for Breast Cancer Classification Based on Ultrasound Images." Dongchen Ling, Xiong Jiao. Ultrasonic Imaging, pp. 171-181. DOI: 10.1177/01617346251346060.
Pub Date: 2025-09-01 | Epub Date: 2025-07-03 | DOI: 10.1177/01617346251346922
Sriharsha Gummadi, Amr Mohammed, Mostafa Alnoury, Fari Fall, Tania Siu Xiao, Kaizer Contreras, Adam Maxwell, Eli Vlaisavljevich, Ji-Bin Liu, Corinne E Wessner, Flemming Forsberg, Allison Goldberg, George Koenig, John R Eisenbrey
Contrast-enhanced ultrasound (CEUS) shows promise in solid organ trauma by identifying areas of disrupted perfusion. In contrast, B-Flow ultrasound offers high-fidelity imaging of larger vessels. We hypothesize that contrast-enhanced B-Flow (CEB-Flow) will improve the accuracy of hepatic vessel injury delineation, as an adjunct to CEUS and future ultrasound-guided therapies. Imaging data were collected using our IACUC-approved swine model of traumatic liver injury; all procedures were approved under this IACUC protocol. Sonography was performed using a Logiq E10 scanner with a C1-6 probe (GE HealthCare). After ultrasound-guided liver trauma, we performed open-abdomen B-mode ultrasound, CEUS, and CEB-Flow imaging of the injury during infusion of Definity (Lantheus Medical Imaging, N. Billerica, MA). CEUS was performed using coded harmonic imaging, and CEB-Flow using a commercial package (GE HealthCare). Twelve swine were used for analysis. Three blinded interpreters were asked to identify injured liver parenchyma and lacerated vessels. Identification rates were compared using ultrasound-guided laceration images and pathology confirmation as the reference standard. Liver injury identification ranged from 88.3% to 100% on CEUS and from 50% to 66.7% on CEB-Flow. Consensus rates for identifying parenchymal injury were not significantly different (91.7% CEUS vs. 66.7% CEB-Flow, p = .25). Lacerated vessel identification ranged from 41.7% to 58.3% for CEUS and from 75.0% to 91.7% for CEB-Flow. Specifically, CEB-Flow demonstrated improved consensus in identifying lacerated vasculature (41.7% CEUS vs. 91.7% CEB-Flow, p = .041). In this swine model study, the combination of CEUS and CEB-Flow accurately identified and localized traumatic hepatic injury. CEB-Flow may better characterize vessel injury, which in turn may direct and improve bedside management.
Article: "Contrast-Enhanced B-Flow Ultrasound: A Novel Approach to Liver Trauma Imaging." Sriharsha Gummadi, Amr Mohammed, Mostafa Alnoury, Fari Fall, Tania Siu Xiao, Kaizer Contreras, Adam Maxwell, Eli Vlaisavljevich, Ji-Bin Liu, Corinne E Wessner, Flemming Forsberg, Allison Goldberg, George Koenig, John R Eisenbrey. Ultrasonic Imaging, pp. 182-188. DOI: 10.1177/01617346251346922.
Pub Date: 2025-07-01 | Epub Date: 2025-04-15 | DOI: 10.1177/01617346251330257
Huihui Zhou, Lin Sang, Yuanyuan Sun, Xue Gong, Jun Zhang, Lina Liu, Junci Wei, Weijie Jiao, Ming Yu
To evaluate the ability of ultrasonography-based fractal parameters to distinguish between hepatocellular carcinoma (HCC) and intrahepatic cholangiocarcinoma (ICC). This retrospective study assessed the performance of several ultrasound-based parameters, including fractal dimension (FD), lacunarity (LAC), and FD combined with LAC (FL), in distinguishing HCC from ICC, with liver biopsy as the gold standard; data were obtained from 204 eligible patients. Receiver operating characteristic (ROC) curve analysis was conducted to assess the performance of these parameters in distinguishing between HCC and ICC in patients with or without chronic liver disease (CLD). The following parameters were significantly different between patients with and without CLD: the levels of alpha-fetoprotein, abnormal prothrombin, alanine aminotransferase, aspartic acid, total bilirubin, and indirect bilirubin (p < .05). The AUC of FL in differentiating HCC from ICC was 0.983 in patients without CLD and 0.854 in patients with CLD, significantly better than that of FD (non-CLD, 0.902 AUC; CLD, 0.647 AUC; p < .05) and LAC (non-CLD, 0.895 AUC; CLD, 0.843 AUC; p < .05). FL can better distinguish between HCC and ICC in patients with or without CLD than FD and LAC, and may serve as a promising preoperative alternative for distinguishing between the two diseases. Clinical trial: Exploration of noninvasive differential diagnosis of benign and malignant liver tumors. URL: https://register.clinicaltrials.gov; ClinicalTrials.gov ID: NCT06524557.
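Fractal dimension and lacunarity, the two parameters above, are commonly estimated by box counting and gliding-box analysis on a binary lesion mask. A small illustrative sketch using the standard definitions (not the study's exact pipeline; box sizes are hypothetical):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16)):
    """Box-counting FD of a square binary mask: slope of
    log(box count) versus log(1/box size)."""
    n = mask.shape[0]
    counts = []
    for s in sizes:
        c = 0
        for i in range(0, n, s):          # count boxes of side s that
            for j in range(0, n, s):      # contain any foreground pixel
                if mask[i:i + s, j:j + s].any():
                    c += 1
        counts.append(c)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

def lacunarity(mask, s):
    """Gliding-box lacunarity at box size s: <M^2>/<M>^2 = var/mean^2 + 1
    over the box 'masses' (foreground pixel counts)."""
    n = mask.shape[0]
    masses = np.asarray(
        [mask[i:i + s, j:j + s].sum()
         for i in range(n - s + 1) for j in range(n - s + 1)], dtype=float)
    m = masses.mean()
    return masses.var() / (m * m) + 1.0
```

A filled square has FD 2 and lacunarity 1 (perfectly homogeneous); lesions with gappier, more heterogeneous texture score higher LAC, which is what makes the two measures complementary.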
Article: "Performance of Ultrasonography-Based Fractal Parameters in Distinguishing Hepatocellular Carcinoma From Intrahepatic Cholangiocarcinoma." Huihui Zhou, Lin Sang, Yuanyuan Sun, Xue Gong, Jun Zhang, Lina Liu, Junci Wei, Weijie Jiao, Ming Yu. Ultrasonic Imaging, pp. 115-124. DOI: 10.1177/01617346251330257.
We explored the clinical significance of employing deep learning methodologies on ultrasound images to develop an automated model that accurately identifies pleomorphic adenomas and Warthin tumors of the salivary glands. A retrospective study was conducted on 91 patients who underwent ultrasonography examinations between January 2016 and December 2023 and were subsequently diagnosed with pleomorphic adenoma or Warthin's tumor based on postoperative pathological findings. A total of 526 ultrasonography images were collected for analysis. Convolutional neural network (CNN) models, including ResNet18, MobileNetV3Small, and InceptionV3, were trained and validated on these images for the differentiation of pleomorphic adenoma and Warthin's tumor. Performance was evaluated using receiver operating characteristic (ROC) curves, area under the curve (AUC), sensitivity, specificity, positive predictive value, and negative predictive value. Two ultrasound physicians with varying levels of expertise independently evaluated the ultrasound images, and their diagnostic outcomes were compared with the results obtained from the best-performing model. Inter-rater agreement between routine ultrasonography interpretation by the two ultrasonographers and the automatic diagnosis of the best model, relative to the pathological results, was assessed using kappa tests. The deep learning models achieved favorable performance in differentiating pleomorphic adenoma from Warthin's tumor. The ResNet18, MobileNetV3Small, and InceptionV3 models exhibited diagnostic accuracies of 82.4% (AUC: 0.932), 87.0% (AUC: 0.946), and 77.8% (AUC: 0.811), respectively. Among these models, MobileNetV3Small demonstrated the highest performance.
The experienced ultrasonographer achieved a diagnostic accuracy of 73.5%, with sensitivity, specificity, positive predictive value, and negative predictive value of 73.7%, 73.3%, 77.8%, and 68.8%, respectively. The less-experienced ultrasonographer achieved a diagnostic accuracy of 69.0%, with sensitivity, specificity, positive predictive value, and negative predictive value of 66.7%, 71.4%, 71.4%, and 66.7%, respectively. The kappa test revealed strong consistency between the best-performing deep learning model and the postoperative pathological diagnoses (kappa value: .778, p-value < .001). In contrast, the less-experienced ultrasonographer demonstrated poor consistency in image interpretation (kappa value: .380, p-value < .05). The diagnostic accuracy of the best deep learning model was significantly higher than that of the ultrasonographers, and the experienced ultrasonographer exhibited higher diagnostic accuracy than the less-experienced one. This study demonstrates the promising performance of a deep learning-based method utilizing ultrasonography images for the differentiation of pleomorphic adenoma and Warthin's tumor.
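The kappa values reported above measure chance-corrected agreement between a rater (or model) and the pathological reference. A minimal sketch of Cohen's kappa using the standard formula (independent of whatever software this study used):

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa between two label sequences:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    a, b = np.asarray(a), np.asarray(b)
    labels = np.union1d(a, b)
    po = np.mean(a == b)                                     # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in labels)  # chance agreement
    return (po - pe) / (1 - pe)
```

Values near 1 indicate strong agreement (as for the best model, .778), while values near 0 indicate agreement no better than chance.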
Article: "Deep Learning Based on Ultrasound Images Differentiates Parotid Gland Pleomorphic Adenomas and Warthin Tumors." Yajuan Li, Mingchi Zou, Xiaogang Zhou, Xia Long, Xue Liu, Yanfeng Yao. Ultrasonic Imaging, pp. 107-114. DOI: 10.1177/01617346251319410.
Ultrasound imaging is used to measure the muscle-tendon junction (MTJ) in order to investigate the mechanical properties of the tendon and the interaction of the muscle-tendon unit in vivo. Although the MTJ can be observed clearly in the resting state, accurate tracking of the MTJ is difficult during muscle contraction because its morphology changes. We devised a novel method using an algorithm that extracts and tracks multiple feature points in ultrasound images to automatically measure the MTJ as it moves during muscle contraction. Instead of using a single reference image, multiple feature points are used to improve tracking performance during deformation of the MTJ. We then experimentally evaluated the usefulness of this method. Tests were conducted on 20 healthy participants performing isometric maximal contractions, and ultrasound echo images of the medial gastrocnemius and Achilles tendon junctions were recorded. MTJ excursion was calculated using the developed multiple feature point algorithm and two conventional methods, multi-updating template-matching and modified Lucas-Kanade (LK) tracking, based on automatic and manual analyses. The root mean square error (RMSE) was used to compare the results, and the intraclass correlation coefficient (ICC) was used to evaluate repeatability among examiners. RMSE was 1.57 ± 0.62 for the proposed algorithm, versus 2.18 ± 0.89 and 1.84 ± 1.13 for the conventional methods. The Bland-Altman plot showed that the proposed method exhibited a narrower 95% confidence interval than the two conventional methods; thus, the proposed algorithm had the smallest error. Furthermore, the ICC values were 0.96, 0.40, and 0.86 for the proposed algorithm, multi-updating template-matching, and the modified LK method, respectively. When tracking an MTJ excursion that flexibly changes its shape, the use of multiple feature points provides robust results and achieves tracking that approximates the manual analysis results.
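Aggregating several tracked feature points into one excursion trace and scoring it against the manual analysis with RMSE can be sketched as follows (the median aggregation rule here is an assumption for illustration, not necessarily the paper's fusion rule):

```python
import numpy as np

def mtj_excursion_from_points(point_tracks):
    """Combine per-frame positions of several tracked feature points
    (n_points x n_frames) into one MTJ excursion trace.
    Median aggregation is an illustrative choice: it is robust to a
    single feature point drifting off the junction."""
    tracks = np.asarray(point_tracks, dtype=float)
    disp = tracks - tracks[:, :1]        # displacement from the first frame
    return np.median(disp, axis=0)       # robust per-frame excursion estimate

def rmse(auto_trace, manual_trace):
    """Root mean square error between automatic and manual excursion traces."""
    d = np.asarray(auto_trace, float) - np.asarray(manual_trace, float)
    return float(np.sqrt(np.mean(d ** 2)))
```

With three tracks of which one drifts, the median trace follows the two consistent points, which is the intuition behind using multiple feature points instead of a single reference template.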
Title: Automated Analysis of Ultrasound Images to Measure Muscle-Tendon Junction Excursions by Using the Multiple Feature Point Tracking Algorithm. Authors: Taku Miyazawa, Keisuke Kubota, Hiroki Hanawa, Keisuke Hirata, Tatsuya Endo, Tsutomu Fujino, Katsuya Onitsuka, Moeka Yokoyama, Naohiko Kanemura. DOI: 10.1177/01617346251340322. Ultrasonic Imaging, pp. 125-133 (July 2025).
Pub Date: 2025-07-01 | Epub Date: 2025-06-20 | DOI: 10.1177/01617346251330111
Wuyi Shen, Yuancheng Zhang, Haoyu Zhang, Hui Zhong, Mingxi Wan
B-line artifacts in lung ultrasound, pivotal for diagnosing pulmonary conditions, warrant automated recognition to enhance diagnostic accuracy. In this paper, a method for identifying B-line vertical artifacts in lung ultrasound based on the radio frequency (RF) signal was proposed. B-line regions were distinguished from non-B-line regions by feeding multiple characteristic parameters into a nonlinear support vector machine (SVM). Six characteristic parameters were evaluated: permutation entropy, information entropy, kurtosis, skewness, the Nakagami shape factor, and approximate entropy. After this evaluation revealed differences in the parameters' discriminative power, principal component analysis (PCA) was used to reduce the feature set to four dimensions before SVM classification. Four types of experiments were conducted: a sponge model with dripping water, gelatin phantoms containing either glass beads or gelatin droplets, and in vivo experiments. By employing precise feature selection and analyzing scan lines rather than full images, this approach significantly reduced the dependency on large image datasets without compromising discriminative accuracy. The method exhibited performance comparable to contemporary image-based deep learning approaches, which, while highly effective, typically necessitate extensive training data and expert annotation of large datasets to establish ground truth. Owing to the optimized architecture of the model, efficient sample recognition was achieved, with the capability to process between 27,000 and 33,000 scan lines per second (a frame rate exceeding 100 FPS at 256 scan lines per frame), thus supporting real-time analysis.
The results demonstrate that the accuracy of the method to classify a scan line as belonging to a B-line region was up to 88%, with sensitivity reaching up to 90%, specificity up to 87%, and an F1-score up to 89%. This approach effectively reflects the performance of scan line classification pertinent to B-line identification. Our approach reduces the reliance on large annotated datasets, thereby streamlining the preprocessing phase.
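Of the six scan-line parameters named above, permutation entropy is perhaps the least familiar; a minimal pure-Python sketch follows. The synthetic signals, the pattern order of 3, and normalization by log2(order!) are illustrative assumptions, not the paper's exact computation on RF data.

```python
# Sketch: permutation entropy of a 1-D signal -- the Shannon entropy of
# its ordinal (rank-order) patterns. Low for regular signals, high for
# disordered ones. Signals below are synthetic, for illustration only.
import math
from collections import Counter

def permutation_entropy(signal, order=3, normalize=True):
    """Entropy of length-`order` ordinal patterns in `signal`."""
    patterns = Counter()
    for i in range(len(signal) - order + 1):
        window = signal[i:i + order]
        # Ordinal pattern: the index order that sorts the window.
        patterns[tuple(sorted(range(order), key=window.__getitem__))] += 1
    total = sum(patterns.values())
    h = sum((c / total) * math.log2(total / c) for c in patterns.values())
    # Normalize to [0, 1] by the maximum entropy over order! patterns.
    return h / math.log2(math.factorial(order)) if normalize else h

# A monotone ramp has a single ordinal pattern -> entropy 0.
print(permutation_entropy(list(range(50))))   # -> 0.0
# An alternating signal uses two patterns equally -> higher entropy.
print(permutation_entropy([0, 1] * 25))
```

The appeal for scan-line classification is that such parameters summarize a whole RF line with one number, so the downstream SVM sees a handful of features instead of raw samples.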
Title: Automatic Detection of B-Lines in Lung Ultrasound Based on the Evaluation of Multiple Characteristic Parameters Using Raw RF Data. Authors: Wuyi Shen, Yuancheng Zhang, Haoyu Zhang, Hui Zhong, Mingxi Wan. Ultrasonic Imaging, pp. 134-152 (July 2025).
Pub Date: 2025-07-01 | Epub Date: 2025-06-12 | DOI: 10.1177/01617346251334756
Ying Luo, Dongmei Lin, Hongyan Tian, Huixia Tang, Lei Yang, Jingxu Leng, Wei Jiang, Yi Hao
Ultrasound contrast agents (UCAs) are medical reagents used for contrast-enhanced ultrasound imaging. Over more than 50 years of development, and with continually deepening research, the composition, physicochemical properties, functions, and application directions of contrast agents have changed significantly. This paper comprehensively describes the origin, development stages, classification, main clinical applications, and latest technical progress of ultrasound contrast agents; reviews the imaging principles, safety, and tumor-targeted imaging and therapy of UCAs; details the classification of UCAs and their clinical applications throughout the body; and compares the physical characteristics and diagnostic performance of SonoVue and Sonazoid, the two contrast agents most widely used in clinical practice. It is intended as an introduction to the types and clinical use of UCAs for readers new to the field.
Title: Classification and Clinical Application of Ultrasound Contrast Agents. Authors: Ying Luo, Dongmei Lin, Hongyan Tian, Huixia Tang, Lei Yang, Jingxu Leng, Wei Jiang, Yi Hao. Ultrasonic Imaging, pp. 153-167 (July 2025).
To explore the image features and the diagnostic value of contrast-enhanced ultrasound (CEUS) for ductal carcinoma in situ (DCIS) of the breast. A total of 96 female patients with a solitary and histologically proven DCIS were analyzed retrospectively, and 100 female cases of invasive ductal carcinoma (IDC) lesions were used as the control group. The Breast Imaging Reporting and Data System (BI-RADS) category of breast lesions was assessed according to conventional ultrasound features. The DCIS lesions were classified into mass type and non-mass type. The CEUS characteristics of these breast lesions were retrospectively analyzed qualitatively and quantitatively. The final gold standard was biopsy or surgery with histo-pathological examination. Comparing the ultrasound images of DCIS with that of IDC, there were significant differences in echo pattern, calcification morphology, and calcification distribution (p < .05 for all). There was a significant difference between DCIS and IDC in enhancement intensity, perfusion defects, peripheral high enhancement, intratumoral vessels, and arrival time (AT) (p < .05 for all). In the logistic multivariate regression analysis, two indicators linked with DCIS were recognized: perfusion defects (p = .002) and peripheral high enhancement (p < .001). In forecasting DCIS, the logistic regression equation resulted in an AUC of 0.689, a specificity of 0.720, and a sensitivity of 0.563. CEUS showed differences in enhancement characteristics between DCIS and IDC, with perfusion defects and peripheral high enhancement being associated with DCIS.
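The abstract above summarizes discrimination with an AUC of 0.689. As background, the ROC AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney statistic, with ties counting one half). A minimal sketch with synthetic scores; neither the scores nor the labels below come from the study.

```python
# Sketch: ROC AUC as the Mann-Whitney probability that a positive case
# outranks a negative one. Scores and labels are synthetic.

def roc_auc(scores, labels):
    """AUC over all positive/negative pairs; ties count 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3]   # model outputs
labels = [1,   1,   0,   1,   0,    1,   0,   0]     # 1 = DCIS-like class
print(roc_auc(scores, labels))   # -> 0.8125
```

This pairwise reading makes values like 0.689 easy to interpret: the model ranks a random positive above a random negative about 69% of the time, modestly better than the 0.5 of chance.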
Title: Image Features and Diagnostic Value of Contrast-Enhanced Ultrasound for Ductal Carcinoma In Situ of the Breast: Preliminary Findings. Authors: Weiwei Li, Yingyan Zhao, Xiaochun Fei, Ying Wu, Weiwei Zhan, Wei Zhou, Shujun Xia, Yanyan Song, Jianqiao Zhou. DOI: 10.1177/01617346241292032. Ultrasonic Imaging, pp. 59-67 (March 2025).