
Journal of Medical Imaging: latest publications

Fine-grained multiclass nuclei segmentation with molecular empowered all-in-SAM model.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-09-01 Epub Date: 2025-09-04 DOI: 10.1117/1.JMI.12.5.057501
Xueyuan Li, Can Cui, Ruining Deng, Yucheng Tang, Quan Liu, Tianyuan Yao, Shunxing Bao, Naweed Chowdhury, Haichun Yang, Yuankai Huo

Purpose: Recent developments in computational pathology have been driven by advances in vision foundation models (VFMs), particularly the Segment Anything Model (SAM). This model facilitates nuclei segmentation through two primary methods: prompt-based zero-shot segmentation and the use of cell-specific SAM models for direct segmentation. These approaches enable effective segmentation across a range of nuclei and cells. However, general VFMs often face challenges with fine-grained semantic segmentation, such as identifying specific nuclei subtypes or particular cells.
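As a toy illustration of the prompt-based idea, the sketch below treats a single user click as the prompt and returns the connected foreground region around it on a binary map. This is a minimal stand-in only (SAM relies on a learned image encoder and mask decoder, not flood fill), and `segment_from_prompt` and its grid are illustrative names, not the paper's code.

```python
from collections import deque

def segment_from_prompt(mask, seed):
    """Toy 'prompt-based' segmentation: given a binary foreground map and a
    single click (seed), return the connected component containing the click.
    Illustrative only; SAM uses a learned encoder/decoder, not flood fill."""
    rows, cols = len(mask), len(mask[0])
    r0, c0 = seed
    if not mask[r0][c0]:
        return set()                     # click landed on background
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and mask[nr][nc] and (nr, nc) not in region):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region
```

On a small binary grid, clicking inside a nucleus-like blob returns exactly that blob's pixels; fine-grained subtype labeling, the hard part discussed above, is precisely what this kind of purely geometric prompt cannot provide.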

Approach: In this paper, we propose the molecular empowered all-in-SAM model to advance computational pathology by leveraging the capabilities of VFMs. This model incorporates a full-stack approach focusing on (1) annotation: engaging lay annotators through molecular empowered learning to reduce the need for detailed pixel-level annotations; (2) learning: adapting the SAM model to emphasize specific semantics, exploiting its strong generalizability via a SAM adapter; and (3) refinement: enhancing segmentation accuracy by integrating molecular oriented corrective learning.

Results: Experimental results from both in-house and public datasets show that the all-in-SAM model significantly improves cell classification performance, even when faced with varying annotation quality.

Conclusions: Our approach not only reduces the workload for annotators but also extends the accessibility of precise biomedical image analysis to resource-limited settings, thereby advancing medical diagnostics and automating pathology image analysis.

Journal of Medical Imaging, 12(5): 057501. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12410749/pdf/
Citations: 0
BigReg: an efficient registration pipeline for high-resolution X-ray and light-sheet fluorescence microscopy.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-09-01 Epub Date: 2025-10-06 DOI: 10.1117/1.JMI.12.5.054004
Siyuan Mei, Fuxin Fan, Mareike Thies, Mingxuan Gu, Fabian Wagner, Oliver Aust, Ina Erceg, Zeynab Mirzaei, Georgiana Neag, Yipeng Sun, Yixing Huang, Andreas Maier

Purpose: We aim to develop a reliable registration pipeline tailored for multimodal mouse bone imaging using X-ray microscopy (XRM) and light-sheet fluorescence microscopy (LSFM). These imaging modalities have emerged as pivotal tools in preclinical research, particularly for studying bone remodeling diseases such as osteoporosis. Although multimodal registration enables micrometer-level structural correspondence and facilitates functional analysis, conventional landmark-, feature-, or intensity-based approaches are often infeasible due to inconsistent signal characteristics and significant misalignment resulting from independent scanning, especially in real-world and reference-free scenarios.

Approach: To address these challenges, we introduce BigReg, an automatic, two-stage registration pipeline optimized for high-resolution XRM and LSFM volumes. The first stage involves extracting surface features and applying two successive global-to-local point-cloud-based methods for coarse alignment. The subsequent stage refines this alignment in the 3D Fourier domain using a modified cross-correlation technique, achieving precise volumetric registration.
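The Fourier-domain refinement can be illustrated, under the simplifying assumption of a pure integer translation, by classic phase correlation; BigReg's modified cross-correlation is more elaborate, so the sketch below shows only the underlying principle, with all names illustrative.

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the integer translation d such that mov == np.roll(ref, d),
    via the normalized Fourier cross-power spectrum (phase correlation)."""
    F_ref, F_mov = np.fft.fftn(ref), np.fft.fftn(mov)
    cross = np.conj(F_ref) * F_mov
    cross /= np.abs(cross) + 1e-12       # keep only the phase
    corr = np.fft.ifftn(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts beyond half the axis length back to negative values
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

Because the peak search is global, such a step only works once the volumes are roughly aligned, which is exactly what the point-cloud-based coarse stage provides.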

Results: Evaluations using expert-annotated landmarks and augmented test data demonstrate that BigReg approaches the accuracy of landmark-based registration, with a landmark distance (LMD) of 8.36 ± 0.12 μm and a landmark fitness (LM fitness) of 85.71% ± 1.02%. Moreover, BigReg can provide an optimal initialization for mutual information-based methods that otherwise fail on their own, further reducing the LMD to 7.24 ± 0.11 μm and increasing the LM fitness to 93.90% ± 0.77%.

Conclusions: To the best of our knowledge, BigReg is the first automated method to successfully register XRM and LSFM volumes without requiring manual intervention or prior alignment cues. Its ability to accurately align fine-scale structures, such as lacunae in XRM and osteocytes in LSFM, opens up new avenues for quantitative, multimodal analysis of bone microarchitecture and disease pathology, particularly in studies of osteoporosis.

Journal of Medical Imaging, 12(5): 054004. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12499931/pdf/
Citations: 0
Approximating the ideal observer for joint signal detection and estimation tasks by the use of Markov-Chain Monte Carlo with generative adversarial networks.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-09-01 Epub Date: 2025-10-21 DOI: 10.1117/1.JMI.12.5.051810
Dan Li, Kaiyan Li, Weimin Zhou, Mark A Anastasio

Purpose: The Bayesian ideal observer (IO) is a special model observer that achieves the best possible performance on tasks that involve signal detection or discrimination. Although IOs are desired for optimizing and assessing imaging technologies, they remain difficult to compute. Previously, a hybrid method that combines deep learning (DL) with a Markov-Chain Monte Carlo (MCMC) method was proposed for estimating the IO test statistic for joint signal detection-estimation tasks. That method will be referred to as the hybrid MCMC method. However, the hybrid MCMC method was restricted to use cases that involved relatively simple stochastic background and signal models.

Approach: The previously developed hybrid MCMC method is generalized by utilizing a framework that integrates deep generative modeling into the MCMC sampling process. This method employs a generative adversarial network (GAN) that is trained on object or signal ensembles to establish data-driven stochastic object and signal models, respectively, and will be referred to as the hybrid MCMC-GAN method. This circumvents the limitation of traditional MCMC methods and enables the estimation of the IO test statistic with consideration of broader classes of clinically relevant object and signal models.
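As a minimal sketch of the MCMC ingredient, the toy below runs a random-walk Metropolis sampler on a 1D posterior over a signal amplitude and uses the posterior mean as the estimate. The actual hybrid method draws samples through a GAN's latent space, which this sketch does not attempt; every name here is illustrative.

```python
import math
import random

def metropolis_hastings(log_post, x0, n_samples, step=0.5, burn=500, seed=1):
    """Minimal random-walk Metropolis sampler for a 1D log-posterior."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for i in range(n_samples + burn):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        # accept/reject; 1 - random() lies in (0, 1], so the log is finite
        if math.log(1.0 - rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        if i >= burn:
            samples.append(x)
    return samples

# Toy posterior over a signal amplitude: Gaussian with mean 2.0, sd 0.5.
log_post = lambda a: -0.5 * ((a - 2.0) / 0.5) ** 2
draws = metropolis_hastings(log_post, x0=0.0, n_samples=5000)
amp_estimate = sum(draws) / len(draws)   # posterior-mean amplitude estimate
```

The point of the generalization above is that when the background is an ensemble of real anatomy rather than a closed-form density, the GAN supplies the density implicitly and the sampler explores its latent space instead.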

Results: The hybrid MCMC-GAN method was evaluated on two binary detection-estimation tasks in which the observer must detect a signal and estimate its amplitude if the signal is detected. First, a stylized signal-known-statistically (SKS) and background-known-exactly task was considered. A GAN was employed to establish a stochastic signal model, enabling direct comparison of our GAN-based IO approximation with a closed-form expression for the IO decision strategy. The results confirmed that the proposed method could accurately approximate the performance of the true IO. Next, an SKS and background-known-statistically (BKS) task was considered. Here, a GAN was employed to establish a stochastic object model that described anatomical variability in an ensemble of magnetic resonance (MR) brain images. This represented a setting where traditional MCMC methods are inapplicable. In this study, although a reference estimate of the true IO performance was unavailable, the hybrid MCMC-GAN produced area under the estimation receiver operating characteristic curve (AEROC) estimates that exceeded those of a sub-ideal observer that represented a lower bound for the IO performance.
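For readers unfamiliar with the metric, the detection component of such a figure of merit is the empirical ROC area: the probability that a signal-present score outranks a signal-absent one. The paper's AEROC additionally folds in estimation accuracy, which this hedged sketch omits.

```python
def empirical_auc(present, absent):
    """Empirical ROC area: probability that a randomly drawn signal-present
    score exceeds a signal-absent score (ties count one half)."""
    wins = sum(1.0 if p > a else 0.5 if p == a else 0.0
               for p in present for a in absent)
    return wins / (len(present) * len(absent))
```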

Conclusion: By combining GAN-based generative modeling with MCMC, the hybrid MCMC-GAN method extends a previously proposed IO approximation method to more general detection-estimation tasks. This provides a new capability to benchmark and optimize imaging-system performance through virtual imaging studies.

Journal of Medical Imaging, 12(5): 051810. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12539792/pdf/
Citations: 0
Beyond the Victory Lap.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-09-01 Epub Date: 2025-10-24 DOI: 10.1117/1.JMI.12.5.050101
Bennett A Landman

This editorial introduces JMI Volume 12, Issue 5, which marks the beginning of a new academic year with a celebration of innovation, mentorship, and the evolving role of scholarly publishing in medical imaging. It emphasizes that impactful research is not just about results but about teaching, sharing insights, and advancing the field through curiosity, collaboration, and community-driven resources.

Journal of Medical Imaging, 12(5): 050101. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12550604/pdf/
Citations: 0
TFKT V2: task-focused knowledge transfer from natural images for computed tomography perceptual image quality assessment.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-09-01 Epub Date: 2025-05-28 DOI: 10.1117/1.JMI.12.5.051805
Kazi Ramisa Rifa, Md Atik Ahamed, Jie Zhang, Abdullah Imran

Purpose: The accurate assessment of computed tomography (CT) image quality is crucial for ensuring diagnostic reliability while minimizing radiation dose. Radiologists' evaluations are time-consuming and labor-intensive. Existing automated approaches often require large CT datasets with predefined image quality assessment (IQA) scores that do not align well with clinical evaluations. We aim to develop a reference-free, automated method for CT IQA that closely reflects radiologists' evaluations, reducing the dependency on large annotated datasets.

Approach: We propose Task-Focused Knowledge Transfer (TFKT), a deep learning-based IQA method leveraging knowledge transfer from task-similar natural image datasets. TFKT incorporates a hybrid convolutional neural network-transformer model, enabling accurate quality predictions by learning from natural image distortions with human-annotated mean opinion scores. The model is pre-trained on natural image datasets and fine-tuned on low-dose computed tomography perceptual image quality assessment data to ensure task-specific adaptability.

Results: Extensive evaluations demonstrate that the proposed TFKT method effectively predicts IQA scores aligned with radiologists' assessments on in-domain datasets and generalizes well to out-of-domain clinical pediatric CT exams. The model achieves robust performance without requiring high-dose reference images and can assess the quality of ∼30 CT image slices per second.

Conclusions: The proposed TFKT approach provides a scalable, accurate, and reference-free solution for CT IQA. The model bridges the gap between traditional and deep learning-based IQA, offering clinically relevant and computationally efficient assessments applicable to real-world clinical settings.
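The pretrain-then-adapt recipe can be caricatured with a frozen feature extractor and a newly fitted linear scoring head. This is a hedged analogue only, with fully synthetic data and invented names: TFKT itself fine-tunes a hybrid CNN-transformer end to end on perceptual scores, which a ridge-regression head does not capture.

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone(x, W):
    """Stand-in for a frozen, pretrained feature extractor."""
    return np.tanh(x @ W)

def fit_head(feats, scores, lam=1e-3):
    """Fit only a linear scoring head on target IQA labels (ridge regression)."""
    d = feats.shape[1]
    return np.linalg.solve(feats.T @ feats + lam * np.eye(d), feats.T @ scores)

W = rng.standard_normal((8, 32))               # "pretrained" weights, kept frozen
X = rng.standard_normal((200, 8))              # target-domain inputs
y = backbone(X, W) @ rng.standard_normal(32)   # synthetic quality scores
head = fit_head(backbone(X, W), y)             # adapt only the head to the new task
pred = backbone(X, W) @ head
```

The design point mirrored here is that the expensive representation is learned once on abundant natural-image distortion data, and only a light task-specific component is adapted to the scarce CT annotations.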

Journal of Medical Imaging, 12(5): 051805. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12116730/pdf/
Citations: 0
Full-head segmentation of MRI with abnormal brain anatomy: model and data release.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-09-01 Epub Date: 2025-09-17 DOI: 10.1117/1.JMI.12.5.054001
Andrew M Birnbaum, Adam Buchwald, Peter Turkeltaub, Adam Jacks, George Carr, Shreya Kannan, Yu Huang, Abhisheck Datta, Lucas C Parra, Lukas A Hirsch

Purpose: Our goal was to develop a deep network for whole-head segmentation, including clinical magnetic resonance imaging (MRI) with abnormal anatomy, and compile the first public benchmark dataset for this purpose. We collected 98 MRIs with volumetric segmentation labels for a diverse set of human subjects, including normal and abnormal anatomy in clinical cases of stroke and disorders of consciousness.

Approach: Training labels were generated by manually correcting initial automated segmentations for skin/scalp, skull, cerebrospinal fluid, gray matter, white matter, air cavity, and extracephalic air. We developed a "MultiAxial" network consisting of three 2D U-Nets that operate independently in the sagittal, axial, and coronal planes, whose outputs are then combined to produce a single 3D segmentation.
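A minimal sketch of the three-plane design, assuming simple per-voxel averaging as the fusion rule (the paper's actual combination step may differ): run any 2D predictor slice by slice along each axis and average the three resulting volumes. `model2d` stands in for a trained U-Net.

```python
import numpy as np

def slicewise_predict(vol, model2d, axis):
    """Apply a 2D per-slice predictor along one axis of a 3D volume."""
    moved = np.moveaxis(vol, axis, 0)
    out = np.stack([model2d(s) for s in moved])
    return np.moveaxis(out, 0, axis)

def multiaxial_fuse(vol, model2d):
    """Average per-voxel predictions from sagittal, axial, and coronal passes."""
    preds = [slicewise_predict(vol, model2d, ax) for ax in (0, 1, 2)]
    return sum(preds) / 3.0
```

Averaging orthogonal passes tends to suppress the slice-direction artifacts that any single 2D pass produces, which is one motivation for this kind of design.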

Results: The MultiAxial network achieved a test-set Dice score of 0.88 ± 0.04 (median ± interquartile range) on whole-head segmentation, including gray and white matter. This compares with 0.86 ± 0.04 for Multipriors and 0.79 ± 0.10 for SPM12, two standard tools currently available for this task. The MultiAxial network gains robustness by avoiding the need for coregistration with an atlas. It performed well in regions with abnormal anatomy and on images that have been de-identified. It enables more accurate and robust current-flow modeling when incorporated into ROAST, a widely used modeling toolbox for transcranial electric stimulation.
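For reference, the reported Dice score for a single tissue label follows the standard definition, 2|A∩B| / (|A| + |B|), which can be computed directly from two label volumes:

```python
import numpy as np

def dice(seg, ref, label):
    """Dice overlap for one tissue label: 2|A∩B| / (|A| + |B|).
    Returns 1.0 when the label is absent from both volumes."""
    a, b = seg == label, ref == label
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```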

Conclusions: We are releasing a new state-of-the-art tool for whole-head MRI segmentation in abnormal anatomy, along with the largest volume of labeled clinical head MRIs, including labels for nonbrain structures. Together, the model and data may serve as a benchmark for future efforts.

Journal of Medical Imaging, 12(5): 054001. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12442731/pdf/
Joint CT reconstruction of anatomy and implants using a mixed prior model.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-09-01 Epub Date: 2025-10-18 DOI: 10.1117/1.JMI.12.5.053502
Xiao Jiang, Grace J Gang, J Webster Stayman

Purpose: Medical implants, often made of dense materials, pose significant challenges to accurate computed tomography (CT) reconstruction, especially near implants due to beam hardening and partial-volume artifacts. Moreover, diagnostics involving implants often require separate visualization for implants and anatomy. In this work, we propose an approach for joint estimation of anatomy and implants as separate volumes using a mixed prior model.

Approach: We leverage a learning-based prior for anatomy and a sparsity prior for implants to decouple the two volumes. In addition, a hybrid mono-polyenergetic forward model is employed to accommodate the spectral effects of implants, and a multiresolution object model is used to achieve high-resolution implant reconstruction. The reconstruction process alternates between diffusion posterior sampling for anatomy updates and classic optimization for implants and spectral coefficients.
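The alternating scheme can be illustrated on a toy 1D problem: a smooth "anatomy" signal plus a sparse "implant" spike, estimated by alternating a smoothing update (standing in for diffusion posterior sampling) with a soft-thresholding update (the sparsity prior). All helper names and parameters here are illustrative, not the paper's implementation.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def alternating_decompose(y, n_iter=50, smooth_w=5, sparse_t=0.5):
    """Toy 1D analogue of the alternating reconstruction: the anatomy
    update applies a smoothness prior (a moving average here, standing in
    for the learned prior) and the implant update a sparsity prior
    (soft-thresholding). Parameters are illustrative assumptions."""
    anatomy = np.zeros_like(y)
    implant = np.zeros_like(y)
    kernel = np.ones(smooth_w) / smooth_w
    for _ in range(n_iter):
        anatomy = np.convolve(y - implant, kernel, mode="same")  # smooth the residual
        implant = soft_threshold(y - anatomy, sparse_t)          # sparsify the residual
    return anatomy, implant
```

Running this on a sine wave with an added spike recovers the spike in the implant channel while the anatomy channel stays smooth, mirroring how the mixed priors decouple the two volumes.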

Results: Evaluations were performed on emulated cardiac imaging with a stent and spine imaging with pedicle screws. The structures of the cardiac stent with 0.25 mm wires were clearly visualized in the implant images, whereas the blooming artifacts around the stent were effectively suppressed in the anatomical reconstruction. For pedicle screws, the proposed algorithm mitigated streaking and beam-hardening artifacts in the anatomy volume, demonstrating significant improvements in SSIM and PSNR compared with frequency-splitting metal artifact reduction and model-based reconstruction on slices containing implants.
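The PSNR figure of merit quoted in these results can be computed from first principles; a minimal sketch follows (SSIM is more involved and is available in libraries such as scikit-image).

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test
    image; data_range is the maximum possible intensity (1.0 for
    normalized images, 255 for 8-bit)."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return np.inf                       # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```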

Conclusion: The proposed mixed prior model coupled with a hybrid spectral and multiresolution model can help to separate spatially and spectrally distinct objects that differ from anatomical features in single-energy CT, improving both image quality and separate visualization of implants and anatomy.

{"title":"Joint CT reconstruction of anatomy and implants using a mixed prior model.","authors":"Xiao Jiang, Grace J Gang, J Webster Stayman","doi":"10.1117/1.JMI.12.5.053502","DOIUrl":"10.1117/1.JMI.12.5.053502","url":null,"abstract":"<p><strong>Purpose: </strong>Medical implants, often made of dense materials, pose significant challenges to accurate computed tomography (CT) reconstruction, especially near implants due to beam hardening and partial-volume artifacts. Moreover, diagnostics involving implants often require separate visualization for implants and anatomy. In this work, we propose a approach for joint estimation of anatomy and implants as separate volumes using a mixed prior model.</p><p><strong>Approach: </strong>We leverage a learning-based prior for anatomy and a sparsity prior for implants to decouple the two volumes. In addition, a hybrid mono-polyenergetic forward model is employed to accommodate the spectral effects of implants, and a multiresolution object model is used to achieve high-resolution implant reconstruction. The reconstruction process alternates between diffusion posterior sampling for anatomy updates and classic optimization for implants and spectral coefficients.</p><p><strong>Results: </strong>Evaluations were performed on emulated cardiac imaging with stent and spine imaging with pedicle screws. The structures of the cardiac stent with 0.25 mm wires were clearly visualized in the implant images, whereas the blooming artifacts around the stent were effectively suppressed in the anatomical reconstruction. 
For pedicle screws, the proposed algorithm mitigated streaking and beam-hardening artifacts in the anatomy volume, demonstrating significant improvements in SSIM and PSNR compared with frequency-splitting metal artifact reduction and model-based reconstruction on slices containing implants.</p><p><strong>Conclusion: </strong>The proposed mixed prior model coupled with a hybrid spectral and multiresolution model can help to separate spatially and spectrally distinct objects that differ from anatomical features in single-energy CT, improving both image quality and separate visualization of implants and anatomy.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 5","pages":"053502"},"PeriodicalIF":1.7,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12537543/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145349286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Segmentation variability and radiomics stability for predicting triple-negative breast cancer subtype using magnetic resonance imaging.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-09-01 Epub Date: 2025-09-17 DOI: 10.1117/1.JMI.12.5.054501
Isabella Cama, Alejandro Guzmán, Cristina Campi, Michele Piana, Karim Lekadir, Sara Garbarino, Oliver Díaz

Purpose: Many studies caution against using radiomic features that are sensitive to contouring variability in predictive models for disease stratification. Consequently, metrics such as the intraclass correlation coefficient (ICC) are recommended to guide feature selection based on stability. However, the direct impact of segmentation variability on the performance of predictive models remains underexplored. We examine how segmentation variability affects both feature stability and predictive performance in the radiomics-based classification of triple-negative breast cancer (TNBC) using breast magnetic resonance imaging.

Approach: We analyzed 244 images from the Duke dataset, introducing segmentation variability through controlled modifications of manual segmentations. For each segmentation mask, explainable radiomic features were selected using Shapley Additive exPlanations and used to train logistic regression models. Feature stability across segmentations was assessed via ICC, Pearson's correlation, and reliability scores quantifying the relationship between segmentation variability and feature robustness.
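The stability screen described above can be sketched concretely. Below is a one-way random-effects ICC(1,1) for a feature measured under two segmentations of the same cases; this is one common ICC variant, chosen here as an assumption since the abstract does not state which form was used. Pearson's correlation, used alongside it, is available via `np.corrcoef`.

```python
import numpy as np

def icc_oneway(x, y):
    """One-way random-effects ICC(1,1) for a radiomic feature measured
    under two segmentations (k = 2 ratings per case). One common
    stability screen; the paper's exact ICC variant may differ."""
    data = np.stack([np.asarray(x, float), np.asarray(y, float)], axis=1)
    n, k = data.shape
    subj_means = data.mean(axis=1)
    grand = data.mean()
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)            # between-subject
    msw = ((data - subj_means[:, None]) ** 2).sum() / (n * (k - 1))  # within-subject
    return (msb - msw) / (msb + (k - 1) * msw)
```

Identical measurements give an ICC of 1.0; values near 0 indicate that segmentation variability swamps the between-subject signal.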

Results: Model performances in predicting TNBC do not exhibit a significant difference across varying segmentations. The most explicative and predictive features exhibit decreasing ICC as segmentation accuracy decreases. However, their predictive power remains intact due to low ICC combined with high Pearson's correlation. No shared numerical relationship is found between feature stability and segmentation variability among the most predictive features.

Conclusions: Moderate segmentation variability has a limited impact on model performance. Although incorporating peritumoral information may reduce feature reproducibility, it does not compromise predictive utility. Notably, feature stability is not a strict prerequisite for predictive relevance, highlighting that exclusive reliance on ICC or stability metrics for feature selection may inadvertently discard informative features.

{"title":"Segmentation variability and radiomics stability for predicting triple-negative breast cancer subtype using magnetic resonance imaging.","authors":"Isabella Cama, Alejandro Guzmán, Cristina Campi, Michele Piana, Karim Lekadir, Sara Garbarino, Oliver Díaz","doi":"10.1117/1.JMI.12.5.054501","DOIUrl":"https://doi.org/10.1117/1.JMI.12.5.054501","url":null,"abstract":"<p><strong>Purpose: </strong>Many studies caution against using radiomic features that are sensitive to contouring variability in predictive models for disease stratification. Consequently, metrics such as the intraclass correlation coefficient (ICC) are recommended to guide feature selection based on stability. However, the direct impact of segmentation variability on the performance of predictive models remains underexplored. We examine how segmentation variability affects both feature stability and predictive performance in the radiomics-based classification of triple-negative breast cancer (TNBC) using breast magnetic resonance imaging.</p><p><strong>Approach: </strong>We analyzed 244 images from the Duke dataset, introducing segmentation variability through controlled modifications of manual segmentations. For each segmentation mask, explainable radiomic features were selected using Shapley Additive exPlanations and used to train logistic regression models. Feature stability across segmentations was assessed via ICC, Pearson's correlation, and reliability scores quantifying the relationship between segmentation variability and feature robustness.</p><p><strong>Results: </strong>Model performances in predicting TNBC do not exhibit a significant difference across varying segmentations. The most explicative and predictive features exhibit decreasing ICC as segmentation accuracy decreases. However, their predictive power remains intact due to low ICC combined with high Pearson's correlation. 
No shared numerical relationship is found between feature stability and segmentation variability among the most predictive features.</p><p><strong>Conclusions: </strong>Moderate segmentation variability has a limited impact on model performance. Although incorporating peritumoral information may reduce feature reproducibility, it does not compromise predictive utility. Notably, feature stability is not a strict prerequisite for predictive relevance, highlighting that exclusive reliance on ICC or stability metrics for feature selection may inadvertently discard informative features.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 5","pages":"054501"},"PeriodicalIF":1.7,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12443385/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145087856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Correlation of objective image quality metrics with radiologists' diagnostic confidence depends on the clinical task performed.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-09-01 Epub Date: 2025-04-11 DOI: 10.1117/1.JMI.12.5.051803
Michelle C Pryde, James Rioux, Adela Elena Cora, David Volders, Matthias H Schmidt, Mohammed Abdolell, Chris Bowen, Steven D Beyea

Purpose: Objective image quality metrics (IQMs) are widely used as outcome measures to assess acquisition and reconstruction strategies for diagnostic images. For nonpathological magnetic resonance (MR) images, these IQMs correlate to varying degrees with expert radiologists' confidence scores of overall perceived diagnostic image quality. However, it is unclear whether IQMs also correlate with task-specific diagnostic image quality or expert radiologists' confidence in performing a specific diagnostic task, which calls into question their use as surrogates for radiologist opinion.

Approach: 0.5 T MR images from 16 stroke patients and two healthy volunteers were retrospectively undersampled (R = 1 to 7×) and reconstructed via compressed sensing. Three neuroradiologists reported the presence/absence of acute ischemic stroke (AIS) and assigned a Fazekas score describing the extent of chronic ischemic lesion burden. Neuroradiologists ranked their confidence in performing each task using a 1 to 5 Likert scale. Confidence scores were correlated with noise quality measure, the visual information fidelity criterion, the feature similarity index, root mean square error, and structural similarity (SSIM) via nonlinear regression modeling.
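A minimal sketch of this analysis pattern: fit a simple nonlinear model of confidence scores against an IQM and inspect the fit. The data below are synthetic stand-ins (the study's readings are not reproduced here), and the quadratic model is an assumption, since the abstract does not name the regression form.

```python
import numpy as np

# Synthetic stand-in readings: SSIM for 20 reconstructions and the
# 1-to-5 Likert confidence a reader might assign (S-shaped response).
ssim = np.linspace(0.5, 1.0, 20)
conf = np.clip(np.round(1.0 + 4.0 / (1.0 + np.exp(-20.0 * (ssim - 0.75)))), 1, 5)

# One simple nonlinear model: quadratic regression of confidence on SSIM.
coefs = np.polyfit(ssim, conf, deg=2)
fit = np.polyval(coefs, ssim)
r = np.corrcoef(fit, conf)[0, 1]   # goodness of the nonlinear fit
```

A high correlation between fitted and observed scores, as in this toy data, is what a "correlated IQM" looks like; the AIS task in the study showed no such relationship because reader performance was flat across image degradation.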

Results: Although acceleration alters image quality, neuroradiologists remain able to report pathology. All of the IQMs tested correlated to some degree with diagnostic confidence for assessing chronic ischemic lesion burden, but none correlated with diagnostic confidence in diagnosing the presence/absence of AIS due to consistent radiologist performance regardless of image degradation.

Conclusions: Accelerated images were helpful for understanding the ability of IQMs to assess task-specific diagnostic image quality in the context of chronic ischemic lesion burden, although not in the case of AIS diagnosis. These findings suggest that commonly used IQMs, such as the SSIM index, do not necessarily indicate an image's utility when performing certain diagnostic tasks.

{"title":"Correlation of objective image quality metrics with radiologists' diagnostic confidence depends on the clinical task performed.","authors":"Michelle C Pryde, James Rioux, Adela Elena Cora, David Volders, Matthias H Schmidt, Mohammed Abdolell, Chris Bowen, Steven D Beyea","doi":"10.1117/1.JMI.12.5.051803","DOIUrl":"10.1117/1.JMI.12.5.051803","url":null,"abstract":"<p><strong>Purpose: </strong>Objective image quality metrics (IQMs) are widely used as outcome measures to assess acquisition and reconstruction strategies for diagnostic images. For nonpathological magnetic resonance (MR) images, these IQMs correlate to varying degrees with expert radiologists' confidence scores of overall perceived diagnostic image quality. However, it is unclear whether IQMs also correlate with task-specific diagnostic image quality or expert radiologists' confidence in performing a specific diagnostic task, which calls into question their use as surrogates for radiologist opinion.</p><p><strong>Approach: </strong>0.5 T MR images from 16 stroke patients and two healthy volunteers were retrospectively undersampled ( <math><mrow><mi>R</mi> <mo>=</mo> <mn>1</mn></mrow> </math> to <math><mrow><mn>7</mn> <mo>×</mo></mrow> </math> ) and reconstructed via compressed sensing. Three neuroradiologists reported the presence/absence of acute ischemic stroke (AIS) and assigned a Fazekas score describing the extent of chronic ischemic lesion burden. Neuroradiologists ranked their confidence in performing each task using a 1 to 5 Likert scale. Confidence scores were correlated with noise quality measure, the visual information fidelity criterion, the feature similarity index, root mean square error, and structural similarity (SSIM) via nonlinear regression modeling.</p><p><strong>Results: </strong>Although acceleration alters image quality, neuroradiologists remain able to report pathology. 
All of the IQMs tested correlated to some degree with diagnostic confidence for assessing chronic ischemic lesion burden, but none correlated with diagnostic confidence in diagnosing the presence/absence of AIS due to consistent radiologist performance regardless of image degradation.</p><p><strong>Conclusions: </strong>Accelerated images were helpful for understanding the ability of IQMs to assess task-specific diagnostic image quality in the context of chronic ischemic lesion burden, although not in the case of AIS diagnosis. These findings suggest that commonly used IQMs, such as the SSIM index, do not necessarily indicate an image's utility when performing certain diagnostic tasks.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 5","pages":"051803"},"PeriodicalIF":1.9,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11991859/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144018546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Contrast-enhanced spectral mammography demonstrates better inter-reader repeatability than digital mammography for screening breast cancer patients.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-09-01 Epub Date: 2025-06-18 DOI: 10.1117/1.JMI.12.5.051806
Alisa Mohebbi, Ali Abdi, Saeed Mohammadzadeh, Mohammad Mirza-Aghazadeh-Attari, Ali Abbasian Ardakani, Afshin Mohammadi

Purpose: To assess the inter-rater agreement between digital mammography (DM) and contrast-enhanced spectral mammography (CESM) in evaluating Breast Imaging Reporting and Data System (BI-RADS) grading.

Approach: This retrospective study included 326 patients recruited between January 2019 and February 2021. The study protocol was pre-registered on the Open Science Framework platform. Two expert radiologists interpreted the CESM and DM findings. Pathological data were used for radiologically suspicious or malignant-appearing lesions, whereas follow-up was considered the gold standard for benign-appearing lesions and breasts without lesions.

Results: For intra-device agreement, both imaging modalities showed "almost perfect" agreement, indicating that different radiologists are expected to report the same BI-RADS score for the same image. Despite showing a similar interpretation, a paired t-test showed significantly higher agreement for CESM compared with DM (p < 0.001). Subgrouping based on the side or view did not show a considerable difference for both imaging modalities. For inter-device agreement, "almost perfect" agreement was also achieved. However, for proven malignant lesions, an overall higher BI-RADS score was achieved for CESM, whereas for benign or normal breasts, a lower BI-RADS score was reported, indicating a more precise BI-RADS classification for CESM compared with DM.
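"Almost perfect" is the Landis-Koch label for agreement coefficients above 0.8. A sketch of one common choice for ordinal scores such as BI-RADS is the quadratic-weighted Cohen's kappa below; the abstract does not state which agreement statistic was used, so this is an illustrative assumption.

```python
import numpy as np

def quadratic_weighted_kappa(r1, r2, n_cat):
    """Quadratic-weighted Cohen's kappa for two raters' ordinal scores
    (e.g., BI-RADS categories coded 0..n_cat-1). 1.0 means perfect
    agreement; 0 means chance-level; negative means worse than chance."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    obs = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        obs[a, b] += 1                      # observed confusion matrix
    i, j = np.indices((n_cat, n_cat))
    w = ((i - j) ** 2) / (n_cat - 1) ** 2   # quadratic disagreement weights
    exp = np.outer(obs.sum(1), obs.sum(0)) / obs.sum()  # chance-expected counts
    return 1.0 - (w * obs).sum() / (w * exp).sum()
```

Because the weights grow with the squared distance between categories, a one-step BI-RADS disagreement is penalized far less than a benign-versus-malignant swap, which suits ordinal grading scales.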

Conclusions: Our findings demonstrated strong agreement among readers regarding the identification of DM and CESM findings in breast images from various views. Moreover, it indicates that CESM is equally precise compared with DM and can be used as an alternative in clinical centers.

{"title":"Contrast-enhanced spectral mammography demonstrates better inter-reader repeatability than digital mammography for screening breast cancer patients.","authors":"Alisa Mohebbi, Ali Abdi, Saeed Mohammadzadeh, Mohammad Mirza-Aghazadeh-Attari, Ali Abbasian Ardakani, Afshin Mohammadi","doi":"10.1117/1.JMI.12.5.051806","DOIUrl":"10.1117/1.JMI.12.5.051806","url":null,"abstract":"<p><strong>Purpose: </strong>Our purpose is to assess the inter-rater agreement between digital mammography (DM) and contrast-enhanced spectral mammography (CESM) in evaluating the Breast Imaging Reporting and Data System (BI-RADS) grading.</p><p><strong>Approach: </strong>This retrospective study included 326 patients recruited between January 2019 and February 2021. The study protocol was pre-registered on the Open Science Framework platform. Two expert radiologists interpreted the CESM and DM findings. Pathological data are used for radiologically suspicious or malignant-appearing lesions, whereas follow-up was considered the gold standard for benign-appearing lesions and breasts without lesions.</p><p><strong>Results: </strong>For intra-device agreement, both imaging modalities showed \"almost perfect\" agreement, indicating that different radiologists are expected to report the same BI-RADS score for the same image. Despite showing a similar interpretation, a paired <math><mrow><mi>t</mi></mrow> </math> -test showed significantly higher agreement for CESM compared with DM ( <math><mrow><mi>p</mi> <mo><</mo> <mn>0.001</mn></mrow> </math> ). Subgrouping based on the side or view did not show a considerable difference for both imaging modalities. For inter-device agreement, \"almost perfect\" agreement was also achieved. 
However, for proven malignant lesions, an overall higher BI-RADS score was achieved for CESM, whereas for benign or normal breasts, a lower BI-RADS score was reported, indicating a more precise BI-RADS classification for CESM compared with DM.</p><p><strong>Conclusions: </strong>Our findings demonstrated strong agreement among readers regarding the identification of DM and CESM findings in breast images from various views. Moreover, it indicates that CESM is equally precise compared with DM and can be used as an alternative in clinical centers.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 5","pages":"051806"},"PeriodicalIF":1.9,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12175086/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144334196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}