
Journal of Medical Imaging: Latest Publications

Synthetic multi-inversion time magnetic resonance images for visualization of subcortical structures.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2026-01-01 Epub Date: 2026-01-06 DOI: 10.1117/1.JMI.13.1.014002
Savannah P Hays, Lianrui Zuo, Anqi Feng, Yihao Liu, Blake E Dewey, Jiachen Zhuo, Ellen M Mowry, Scott D Newsome, Jerry L Prince, Aaron Carass

Purpose: Visualization of subcortical gray matter is essential in neuroscience and clinical practice, particularly for disease understanding and surgical planning. Although multi-inversion time (multi-TI) T1-weighted (T1-w) magnetic resonance (MR) imaging improves visualization, it is only acquired in specific clinical settings and not available in common public MR datasets.

Approach: We present SyMTIC (synthetic multi-TI contrasts), a deep learning method that generates synthetic multi-TI images using routinely acquired T1-w, T2-weighted (T2-w), and fluid-attenuated inversion recovery (FLAIR) images. Our approach combines image translation via deep neural networks with imaging physics to estimate longitudinal relaxation time (T1) and proton density (ρ) maps. These maps are then used to compute multi-TI images with arbitrary inversion times.
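
A rough illustration of the final step: the sketch below applies the textbook inversion-recovery signal model, S(TI) = ρ·|1 − 2·exp(−TI/T1) + exp(−TR/T1)|, to estimated T1 and ρ maps. The exact signal equation, TR value, and map units used by SyMTIC are assumptions here, not details taken from the abstract.

```python
import numpy as np

def synthesize_multi_ti(t1_map, pd_map, ti_values, tr=4000.0):
    """Compute synthetic multi-TI images from estimated T1 (ms) and
    proton-density (rho) maps with a standard inversion-recovery model:
        S(TI) = rho * |1 - 2*exp(-TI/T1) + exp(-TR/T1)|
    The TR of 4000 ms is an illustrative assumption."""
    t1 = np.clip(t1_map, 1e-3, None)  # guard against division by zero
    images = [pd_map * np.abs(1.0 - 2.0 * np.exp(-ti / t1)
                              + np.exp(-tr / t1))
              for ti in ti_values]
    return np.stack(images, axis=0)   # shape: (n_TI, *spatial_dims)

# Sweep the TI range the abstract highlights (400 to 800 ms).
t1_map = np.full((128, 128), 900.0)   # toy T1 map in ms
pd_map = np.ones((128, 128))          # toy proton-density map
multi_ti = synthesize_multi_ti(t1_map, pd_map, range(400, 801, 100))
```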

Results: SyMTIC was trained using paired magnetization prepared rapid acquisition with gradient echo (MPRAGE) and fast gray matter acquisition T1 inversion recovery (FGATIR) images along with T2-w and FLAIR images. It accurately synthesized multi-TI images from standard clinical inputs, achieving image quality comparable to that of explicitly acquired multi-TI data. The synthetic images, especially for TI values between 400 and 800 ms, enhanced visualization of subcortical structures and improved segmentation of thalamic nuclei.

Conclusion: SyMTIC enables robust generation of high-quality multi-TI images from routine MR contrasts. When paired with the HACA3 algorithm, it generalizes well to varied clinical datasets, including those lacking FLAIR or T2-w images or acquired with unknown parameters, offering a practical solution for improving brain MR image visualization and analysis.

Citations: 0
Ultrasound imaging using single-element biaxial beamforming.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2026-01-01 Epub Date: 2025-12-23 DOI: 10.1117/1.JMI.13.1.017001
Nathan Meulenbroek, Laura Curiel, Adam Waspe, Samuel Pichardo

Purpose: Dynamic focusing of received ultrasound signals, or beamforming, is foundational for ultrasound imaging. Conventionally, it requires arrays of ultrasound sensors to estimate where sound came from using time-of-flight (TOF) measurements. We demonstrate passive beamforming with a single biaxial sensor and accurate passive acoustic mapping with two biaxial sensors using only direction of arrival (DOA) information.

Approach: We introduce two single-element biaxial beamforming algorithms and four biaxial image reconstruction algorithms for a two-element biaxial piezoceramic transducer array. Imaging of a hemispherical acoustic source is characterized in an acoustic scanning tank within the region −30.29 mm ≤ x ≤ 29.94 mm and 50.11 mm ≤ z ≤ 90.45 mm relative to the center of the array. Imaging performance is contrasted with delay, sum, and integrate (DSAI) and delay, multiply, sum, and integrate (DMSAI) algorithms.
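
For context on the baseline algorithms, a minimal sketch of conventional delay, sum, and integrate (DSAI) beamforming over a pixel grid follows. This is the generic multi-sensor formulation, not the paper's single-element biaxial variant; the sound speed, grid, and function names are illustrative assumptions.

```python
import numpy as np

def dsai_map(signals, sensor_pos, grid, fs, c=1480.0):
    """Generic delay, sum, and integrate (DSAI) passive beamforming.

    signals:    (n_sensors, n_samples) received waveforms
    sensor_pos: (n_sensors, 2) sensor coordinates in meters
    grid:       (n_pixels, 2) candidate source positions in meters
    fs:         sampling frequency in Hz
    c:          assumed speed of sound in water (m/s)
    """
    n_sensors, n_samples = signals.shape
    image = np.zeros(len(grid))
    for i, pixel in enumerate(grid):
        # time of flight from this candidate pixel to each sensor
        tof = np.linalg.norm(sensor_pos - pixel, axis=1) / c
        shifts = np.round((tof - tof.min()) * fs).astype(int)
        summed = np.zeros(n_samples)
        for s in range(n_sensors):
            summed[:n_samples - shifts[s]] += signals[s, shifts[s]:]
        image[i] = np.sum(summed ** 2)  # integrate beamformed energy
    return image
```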

Results: Single-element biaxial beamforming can identify DOA with a median error (± interquartile range) of 0.36 ± 0.63 deg and median full-width half-prominence of 7.3 ± 8.6 deg. Using both array elements, DOA-only images demonstrate overall median localization error of 6.41 mm (lateral: 1.02 mm, axial: 5.85 mm, signal-to-noise ratio (SNR): 15.37) and DOA + TOF images demonstrate overall median error of 6.91 mm (lateral: 1.69 mm, axial: 6.11 mm, SNR: 18.37).

Conclusions: To the best of our knowledge, we provide the first demonstration of single-element beamforming using a single stationary piezoceramic and the first demonstration of passive ultrasound imaging without the use of TOF information. These results enable simpler, smaller, more cost-effective arrays for passive ultrasound imaging.

Citations: 0
Impact of menopause and age on breast density and background parenchymal enhancement in dynamic contrast-enhanced magnetic resonance imaging.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-11-01 Epub Date: 2025-03-11 DOI: 10.1117/1.JMI.12.S2.S22002
Grey Kuling, Jennifer D Brooks, Belinda Curpen, Ellen Warner, Anne L Martel

Purpose: Breast density (BD) and background parenchymal enhancement (BPE) are important imaging biomarkers for breast cancer (BC) risk. We aim to evaluate longitudinal changes in quantitative BD and BPE in high-risk women undergoing dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), focusing on the effects of age and transition into menopause.

Approach: A retrospective cohort study analyzed 834 high-risk women undergoing breast DCE-MRI for screening between 2005 and 2020. Quantitative BD and BPE were derived using deep-learning segmentation. Linear mixed-effects models assessed longitudinal changes and the effects of age, menopausal status, weeks since the last menstrual period (LMP-wks), body mass index (BMI), and hormone replacement therapy (HRT) on these imaging biomarkers.
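
A minimal sketch of such a longitudinal model with statsmodels follows; the data file and column names are hypothetical stand-ins for the covariates listed above, not the authors' code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per screening MRI exam.
df = pd.read_csv("bpe_longitudinal.csv")  # assumed columns named below

# A random intercept per woman models repeated exams over time;
# fixed effects mirror the covariates named in the abstract.
model = smf.mixedlm(
    "BPE ~ age + C(menopausal_status) + lmp_wks + bmi + C(hrt)",
    data=df,
    groups=df["patient_id"],
)
result = model.fit()
print(result.summary())
```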

Results: BD decreased with age across all menopausal stages, whereas BPE declined with age in postmenopausal women but remained stable in premenopausal women. HRT elevated BPE in postmenopausal women. Perimenopausal women exhibited decreases in both BD and BPE during the menopausal transition, though cross-sectional age at menopause had no significant effect on either measure. Fibroglandular tissue was positively associated with BPE in perimenopausal women.

Conclusions: We highlight the dynamic impact of menopause on BD and BPE, which correlates well with the known relationship between risk and age at menopause. These findings advance the understanding of imaging biomarkers in high-risk populations and may contribute to the development of improved risk assessment leading to personalized chemoprevention and BC screening recommendations.

Citations: 0
Breast tumor diagnosis via multimodal deep learning using ultrasound B-mode and Nakagami images.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-11-01 Epub Date: 2025-05-14 DOI: 10.1117/1.JMI.12.S2.S22009
Sabiq Muhtadi, Caterina M Gallippi

Purpose: We propose and evaluate multimodal deep learning (DL) approaches that combine ultrasound (US) B-mode and Nakagami parametric images for breast tumor classification. It is hypothesized that integrating tissue brightness information from B-mode images with scattering properties from Nakagami images will enhance diagnostic performance compared with single-input approaches.

Approach: An EfficientNetV2B0 network was used to develop multimodal DL frameworks that took as input (i) numerical two-dimensional (2D) maps or (ii) rendered red-green-blue (RGB) representations of both B-mode and Nakagami data. The diagnostic performance of these frameworks was compared with single-input counterparts using 831 US acquisitions from 264 patients. In addition, gradient-weighted class activation mapping was applied to evaluate diagnostically relevant information utilized by the different networks.
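
The Nakagami parametric inputs are typically produced by moment-based estimation of the Nakagami shape (m) and scale (Ω) parameters in a sliding window over the echo envelope; a minimal sketch under that standard estimator follows. The window size and preprocessing are assumptions, as the abstract does not specify the authors' map-formation pipeline.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def nakagami_maps(envelope, win=7):
    """Moment-based Nakagami parametric maps from an envelope image.

    For envelope amplitude R with local moments E2 = E[R^2], E4 = E[R^4]:
        omega = E2                   (scale parameter)
        m     = E2^2 / (E4 - E2^2)   (shape parameter; denominator is
                                      the local variance of R^2)
    computed over a sliding window of side `win` (assumed size).
    """
    r2 = envelope.astype(np.float64) ** 2
    e2 = uniform_filter(r2, size=win)        # local E[R^2]
    e4 = uniform_filter(r2 ** 2, size=win)   # local E[R^4]
    var_r2 = np.maximum(e4 - e2 ** 2, 1e-12)
    return e2 ** 2 / var_r2, e2              # (m map, omega map)
```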

Results: The multimodal architectures demonstrated significantly higher area under the receiver operating characteristic curve (AUC) values (p < 0.05) than their monomodal counterparts, achieving an average improvement of 10.75%. In addition, the multimodal networks incorporated, on average, 15.70% more diagnostically relevant tissue information. Among the multimodal models, those using RGB representations as input outperformed those that utilized 2D numerical data maps (p < 0.05). The top-performing multimodal architecture achieved a mean AUC of 0.896 [95% confidence interval (CI): 0.813 to 0.959] when performance was assessed at the image level and 0.848 (95% CI: 0.755 to 0.903) when assessed at the lesion level.

Conclusions: Incorporating B-mode and Nakagami information together in a multimodal DL framework improved classification outcomes and increased the amount of diagnostically relevant information accessed by networks, highlighting the potential for automating and standardizing US breast cancer diagnostics to enhance clinical outcomes.

Citations: 0
Sureness of classification of breast cancers as pure ductal carcinoma in situ or with invasive components on dynamic contrast-enhanced magnetic resonance imaging: application of likelihood assurance metrics for computer-aided diagnosis.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-11-01 Epub Date: 2025-06-18 DOI: 10.1117/1.JMI.12.S2.S22012
Heather M Whitney, Karen Drukker, Alexandra Edwards, Maryellen L Giger

Purpose: Breast cancer may persist within milk ducts (ductal carcinoma in situ, DCIS) or advance into surrounding breast tissue (invasive ductal carcinoma, IDC). Occasionally, invasiveness in cancer may be underestimated during biopsy, leading to adjustments in the treatment plan based on unexpected surgical findings. Artificial intelligence/computer-aided diagnosis (AI/CADx) techniques in medical imaging may have the potential to predict whether a lesion is purely DCIS or exhibits a mixture of IDC and DCIS components, serving as a valuable supplement to biopsy findings. To enhance the evaluation of AI/CADx performance, assessing variability on a lesion-by-lesion basis via likelihood assurance measures could add value.

Approach: We evaluated the performance in the task of distinguishing between pure DCIS and mixed IDC/DCIS breast cancers using computer-extracted radiomic features from dynamic contrast-enhanced magnetic resonance imaging using 0.632+ bootstrapping methods (2000 folds) on 550 lesions (135 pure DCIS, 415 mixed IDC/DCIS). Lesion-based likelihood assurance was measured using a sureness metric based on the 95% confidence interval of the classifier output for each lesion.
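
The abstract specifies only that sureness is based on each lesion's 95% CI of classifier output across the bootstrap folds; the sketch below adopts one plausible reading, sureness = 1 − CI width, purely as an assumption.

```python
import numpy as np

def lesion_sureness(fold_outputs, alpha=0.05):
    """Per-lesion sureness from bootstrap classifier outputs.

    fold_outputs: scores for one lesion across the 2000 bootstrap folds.
    Sureness is taken here (as an assumption) to be 1 minus the 95% CI
    width: a narrow interval gives sureness near 1, a wide one near 0.
    """
    lo, hi = np.percentile(fold_outputs,
                           [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return 1.0 - (hi - lo)

rng = np.random.default_rng(0)
scores = rng.beta(8, 2, size=2000)  # toy fold outputs for one lesion
print(round(lesion_sureness(scores), 3))
```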

Results: The median and 95% CI of the 0.632+-corrected area under the receiver operating characteristic curve for the task of classifying lesions as pure DCIS or mixed IDC/DCIS were 0.81 [0.75, 0.86]. The sureness metric varied across the dataset, ranging from 0.0002 (low sureness) to 0.96 (high sureness), with some lesions exhibiting every combination of high or low classifier output with high or low sureness.

Conclusions: Sureness metrics can provide additional insights into the ability of CADx algorithms to pre-operatively predict whether a lesion is invasive.

Citations: 0
Semi-supervised semantic segmentation of cell nuclei with diffusion model and collaborative learning.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-11-01 Epub Date: 2025-03-20 DOI: 10.1117/1.JMI.12.6.061403
Zhuchen Shao, Sourya Sengupta, Mark A Anastasio, Hua Li

Purpose: Automated segmentation and classification of the cell nuclei in microscopic images is crucial for disease diagnosis and tissue microenvironment analysis. Given the difficulties in acquiring large labeled datasets for supervised learning, semi-supervised methods offer alternatives by utilizing unlabeled data alongside labeled data. Effective semi-supervised methods to address the challenges of extremely limited labeled data or diverse datasets with varying numbers and types of annotations remain under-explored.

Approach: Unlike other semi-supervised learning methods that iteratively use labeled and unlabeled data for model training, we introduce a semi-supervised learning framework that combines a latent diffusion model (LDM) with a transformer-based decoder, allowing for independent usage of unlabeled data to optimize their contribution to model training. The model is trained using a sequential training strategy. The LDM is trained in an unsupervised manner on diverse datasets, independent of cell nuclei types, thereby expanding the training data and enhancing training performance. The pre-trained LDM serves as a powerful feature extractor to support the transformer-based decoder's supervised training on limited labeled data and improve final segmentation performance. In addition, the paper explores a collaborative learning strategy to enhance segmentation performance on out-of-distribution (OOD) data.
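
The sequential strategy (unsupervised encoder pre-training, then supervised decoder training) can be sketched as follows; the toy convolutional modules merely stand in for the LDM encoder and transformer-based decoder, and every hyperparameter is an assumption.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the pre-trained LDM encoder and transformer decoder.
encoder = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.GELU(),
                        nn.Conv2d(64, 128, 3, padding=1))
decoder = nn.Sequential(nn.Conv2d(128, 64, 3, padding=1), nn.GELU(),
                        nn.Conv2d(64, 3, 1))  # 3 nucleus classes (assumed)

# Stage 2: freeze the unsupervised-pretrained encoder and fine-tune
# only the decoder on the small labeled set.
for p in encoder.parameters():
    p.requires_grad = False

opt = torch.optim.AdamW(decoder.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """images: (B, 1, H, W) floats; labels: (B, H, W) class indices."""
    with torch.no_grad():          # encoder acts as a fixed extractor
        feats = encoder(images)
    loss = loss_fn(decoder(feats), labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```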

Results: Extensive experiments conducted on four diverse datasets demonstrated that the proposed framework significantly outperformed other semi-supervised and supervised methods for both in-distribution and OOD cases. Through collaborative learning with supervised methods, diffusion model and transformer decoder-based segmentation (DTSeg) achieved consistent performance across varying cell types and different amounts of labeled data.

Conclusions: The proposed DTSeg framework addresses cell nuclei segmentation under limited labeled data by integrating unsupervised LDM training on diverse unlabeled datasets. Collaborative learning demonstrated effectiveness in enhancing the generalization capability of DTSeg to achieve superior results across diverse datasets and cases. Furthermore, the method supports multi-channel inputs and demonstrates strong generalization to both in-distribution and OOD scenarios.

Citations: 0
Benchmarking of deep learning methods for generic MRI multi-organ abdominal segmentation.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-11-01 Epub Date: 2025-12-05 DOI: 10.1117/1.JMI.12.6.064503
Deepa Krishnaswamy, Cosmin Ciausu, Steve Pieper, Ron Kikinis, Benjamin Billot, Andrey Fedorov

Purpose: Recent advances in deep learning have led to robust automated tools for segmentation of abdominal computed tomography (CT). Meanwhile, segmentation of magnetic resonance imaging (MRI) is substantially more challenging due to the inherent signal variability and the increased effort required for annotating training datasets. Hence, existing approaches are trained on limited sets of MRI sequences, which might limit their generalizability.

Approach: To characterize the landscape of MRI abdominal segmentation tools, we present a comprehensive benchmarking of three state-of-the-art and open-source models: MRSegmentator, MRISegmentator-Abdomen, and TotalSegmentator MRI. As these models are trained using labor-intensive manual annotation cycles, we also introduce and evaluate ABDSynth, a SynthSeg-based model purely trained on widely available CT segmentations (no real images). We assess accuracy and generalizability by leveraging three public datasets (not seen by any of the evaluated methods during their training), which span all major manufacturers, five MRI sequences, as well as a variety of subject conditions, voxel resolutions, and fields-of-view.
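
Benchmarks of this kind typically report per-organ Dice overlap between predicted and reference label maps; a minimal scoring sketch with hypothetical organ label IDs follows.

```python
import numpy as np

def dice(pred, ref, label):
    """Dice coefficient for one organ label between integer label maps."""
    p, r = pred == label, ref == label
    denom = p.sum() + r.sum()
    return 2.0 * np.logical_and(p, r).sum() / denom if denom else np.nan

def per_organ_dice(pred_mask, ref_mask, labels):
    return {lab: dice(pred_mask, ref_mask, lab) for lab in labels}

# Toy volumes with hypothetical label IDs (e.g., 1=liver, 2=spleen).
rng = np.random.default_rng(0)
pred = rng.integers(0, 3, size=(4, 64, 64))
ref = rng.integers(0, 3, size=(4, 64, 64))
print(per_organ_dice(pred, ref, labels=[1, 2]))
```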

Results: Our results reveal that MRSegmentator achieves the best performance and is most generalizable. By contrast, ABDSynth yields slightly less accurate results, but its relaxed requirements in training data make it an alternative when the annotation budget is limited.

Conclusions: We perform benchmarking of four open-source models for abdominal MR segmentation on three datasets and demonstrate that models trained on real, heterogeneous, multimodal data yield the best overall performance. We provide evaluation code and datasets for future benchmarking at https://github.com/deepakri201/AbdoBench.

Citations: 0
Scribble-supervised method for cardiac tissue segmentation using position and temporal contrastive information.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-11-01 Epub Date: 2025-11-24 DOI: 10.1117/1.JMI.12.6.064002
Xiaoxuan Ma, Yingao Du, Kuncheng Lian

Purpose: Accurate pixel-level segmentation is essential for medical image analysis, particularly in assisting diagnosis and treatment planning. However, fully supervised learning methods rely heavily on high-quality annotated data, which are often scarce due to the high cost of manual labeling, privacy concerns, and limited availability. We aim to reduce reliance on precise annotations and improve segmentation performance under weak supervision.

Approach: We propose scribble position and temporal contrast learning (SPTCL), an innovative segmentation method that combines contrastive learning with weak supervision. Our method leverages the spatial continuity in 3D medical image volumes and the anatomical similarities across different cardiac phases to construct a contrastive learning task for robust feature representation from unlabeled data. To enhance the feature extraction capabilities, we employ a pre-trained encoder, which is initially trained on the ACDC dataset using contrastive learning to capture robust feature representations. This pre-trained encoder is then transferred to a weakly supervised segmentation network with a dual-branch decoder for further fine-tuning on the task. The predictions from both branches are fused to generate refined pseudo-labels, which are iteratively used to guide network training with only coarse scribble annotations.
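
One plausible reading of the dual-branch objective described above is sketched below: partial cross-entropy on scribbled pixels plus a term supervised by fused pseudo-labels. The fusion weight, loss weighting, and ignore index are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

IGNORE = 255  # value marking unscribbled pixels (assumed convention)

def sptcl_loss(logits_a, logits_b, scribbles, alpha=0.5):
    """logits_*: (B, C, H, W) from the two decoder branches;
    scribbles: (B, H, W) sparse labels with IGNORE elsewhere."""
    # Supervised part: only scribble-annotated pixels contribute.
    sup = (F.cross_entropy(logits_a, scribbles, ignore_index=IGNORE)
           + F.cross_entropy(logits_b, scribbles, ignore_index=IGNORE))

    # Fuse branch probabilities into refined hard pseudo-labels.
    probs = alpha * logits_a.softmax(1) + (1 - alpha) * logits_b.softmax(1)
    pseudo = probs.argmax(1).detach()

    # Both branches are then trained against the fused pseudo-labels.
    unsup = (F.cross_entropy(logits_a, pseudo)
             + F.cross_entropy(logits_b, pseudo))
    return sup + 0.5 * unsup  # 0.5 is an assumed weighting
```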

Results: Experiments on the ACDC dataset show that SPTCL outperforms existing models, achieving a Dice coefficient of 90.5%, with a 2.5% improvement over the baseline and a 1.7% improvement over the latest model. Furthermore, SPTCL reduces training time by ∼33%.

Conclusions: SPTCL effectively addresses the challenges of limited annotation in medical image segmentation by uniting contrastive learning with weak supervision. It demonstrates strong potential for practical deployment in clinical settings where high-quality labels are difficult to obtain.

Citations: 0
Automated coronary calcium detection and scoring on multicenter, multiprotocol noncontrast CT.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-11-01 Epub Date: 2025-11-13 DOI: 10.1117/1.JMI.12.6.064502
Andrew M Nguyen, Jianfei Liu, Tejas Sudharshan Mathai, Peter C Grayson, Perry J Pickhardt, Ronald M Summers

Purpose: Coronary artery disease is the leading global cause of mortality. Automated detection and scoring of calcified plaques can aid cardiovascular risk assessment. We propose a deep learning method for automatic detection and scoring of coronary artery calcified plaques on noncontrast CT scans.

Approach: We utilized five datasets from one internal and four external tertiary care institutions, three of them with manually annotated plaques. A coronary artery calcified plaque detection model was developed using the state-of-the-art nnU-Net deep learning framework, incorporating simultaneous segmentation of the aorta, heart, and lungs to reduce false positives. The training data consisted of 641 noncontrast CT scans from three labeled datasets, representing diverse vascular disease etiologies. Agatston scores were automatically computed to quantify plaque burden. The model was tested on 160 labeled CT scans and compared with a previous detection method. In addition, Agatston scores were correlated with patient demographics and clinical outcomes using two unlabeled datasets.
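
For reference, the standard Agatston computation multiplies each calcification's area by a density weight derived from its peak attenuation. The sketch below applies that textbook rule to detected plaque masks, treating each slice's mask as a single lesion for simplicity; the paper's implementation may separate connected components and handle slice thickness differently.

```python
import numpy as np

def density_factor(peak_hu):
    """Standard Agatston weight from a lesion's peak attenuation (HU)."""
    if peak_hu >= 400: return 4
    if peak_hu >= 300: return 3
    if peak_hu >= 200: return 2
    return 1  # 130 to 199 HU

def agatston_score(ct_slices, calc_masks, pixel_area_mm2):
    """ct_slices: iterable of 2D HU arrays (assumed 3 mm axial slices);
    calc_masks: matching boolean plaque masks (e.g., nnU-Net output);
    pixel_area_mm2: in-plane pixel area in mm^2."""
    score = 0.0
    for hu, mask in zip(ct_slices, calc_masks):
        mask = mask & (hu >= 130)  # Agatston threshold of 130 HU
        if mask.any():
            area = mask.sum() * pixel_area_mm2
            score += area * density_factor(hu[mask].max())
    return score
```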

Results: The predicted and reference Agatston scores demonstrated a strong correlation (r² = 0.973), with a precision of 89.3%, recall of 89.1%, and an average Dice score of 75.0 ± 16.0% on the labeled testing datasets. Stratification into four Agatston groups achieved 92.0% accuracy and a Cohen's Kappa of 0.913. In the unlabeled datasets, Agatston groups showed significant correlations with the Framingham risk score, cardiovascular disease, heart failure, cancer status, fragility fracture risk, smoking, and age, while remaining consistent across race and scanner types.

Conclusions: Coronary artery plaques were accurately detected and segmented using the proposed nnU-Net-based method on noncontrast CT scans. The Agatston-score-based plaque burden assessment facilitates cardiovascular risk stratification, enabling opportunistic screening and population-based studies.

Citations: 0
Longitudinal outcome prediction of prostate cancer patients on active surveillance using multiple instance learning.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-11-01 Epub Date: 2025-10-14 DOI: 10.1117/1.JMI.12.6.061408
Filip Winzell, Ida Arvidsson, Kalle Åström, Niels Christian Overgaard, Felicia-Elena Marginean, Athanasios Simoulis, Anders Bjartell, Agnieszka Krzyzanowska, Anders Heyden

Purpose: To avoid over-treatment of prostate cancer patients following screening for elevated prostate-specific antigen levels, keeping patients on active surveillance has been suggested as an alternative to radical treatment. For patients with low-grade cancer, this entails recurring visits to monitor progression. Our aim was to develop an artificial intelligence-based model that can identify high-risk patients in a cohort of prostate cancer patients on active surveillance.

Approach: We have developed a multiple instance learning-based framework for predicting the longitudinal outcomes for prostate cancer patients on active surveillance. Our models were trained only on whole-slide images with patient-level labels without using explicit Gleason grades. We employed the UNI-2 foundation model and the well-established attention-based multiple instance learning approach. We further evaluated our models by fitting Cox proportional hazards models and testing them on an external dataset.
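
The attention-based MIL pooling referenced above (in the style of Ilse et al.) can be sketched as follows; the 1536-dimensional input is an assumption about the UNI-2 patch embeddings, and the linear head is a toy stand-in for the outcome classifier.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Attention-pooled bag classifier over patch embeddings."""
    def __init__(self, in_dim=1536, hidden=256):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))
        self.head = nn.Linear(in_dim, 1)  # patient-level outcome logit

    def forward(self, bag):               # bag: (n_patches, in_dim)
        weights = torch.softmax(self.attn(bag), dim=0)  # (n_patches, 1)
        slide_feat = (weights * bag).sum(dim=0)         # weighted mean
        return self.head(slide_feat), weights

model = AttentionMIL()
bag = torch.randn(500, 1536)  # toy bag of patch features for one slide
logit, attn = model(bag)
```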

Results: With this approach, we achieved an average area under the receiver operator characteristic curve of 0.958 (95% CI, 0.957 to 0.959). Fitting Cox models to the predicted probabilities achieved a C-index of 0.824 and a hazard ratio of 2.32. However, all models showed a large drop in performance when evaluated on an external dataset.

Conclusion: We show that avoiding Gleason grades is beneficial for longitudinal outcome prediction of prostate cancer. Our results suggest that benign prostate tissue contains prognostic information. However, before our models can be used clinically, much more work remains to improve generalization.

Citations: 0