Journal of Medical Imaging: Latest Publications

Federated learning in computational pathology: a literature review.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-11-01 Epub Date: 2025-11-26 DOI: 10.1117/1.JMI.12.6.061412
Sonal Shukla, Scott Doyle
Purpose: Artificial intelligence has emerged as a powerful technique for data analysis and predictive modeling. However, traditional centralized learning methods, which require aggregating large and diverse datasets at a central location, present considerable privacy and security risks, particularly in sensitive areas such as healthcare. Federated learning (FL) offers a promising alternative by enabling collaborative model training without the need to share raw data. We aim to systematically examine the current state of the art in the application of FL within the healthcare domain, with a focus on computational pathology.

Approach: We conducted a systematic review of the published literature on FL in healthcare, with a specific focus on imaging-based applications relevant to computational pathology. Our analysis includes studies utilizing a range of medical imaging modalities such as whole-slide histopathology images, magnetic resonance imaging, computed tomography, and positron emission tomography. The selected studies were categorized based on a taxonomy of FL architectures, with a focus on understanding each study's motivations, implementation strategies, and targeted problems. We also evaluated recurring technical challenges such as system and data heterogeneity, privacy preservation mechanisms, and communication efficiency, as well as the integration of complementary technologies such as blockchain, homomorphic encryption, and multi-modal learning.

Results: The literature demonstrates a growing adoption of FL across healthcare applications, with increasing interest in computational pathology. Studies report promising outcomes for tasks such as patient outcome prediction, disease classification, and tissue segmentation using decentralized datasets. Notably, federated approaches often match or outperform centralized models in terms of accuracy while maintaining data privacy across institutions. In the case of computational pathology, federated training has proven feasible and effective. However, challenges persist across studies, including data modality heterogeneity, communication overhead, and slow model convergence. Several papers propose novel FL frameworks to address these issues, although standardization across implementations remains limited.

Conclusions: FL holds significant promise for enabling secure, privacy-preserving collaboration in healthcare, particularly within computational pathology. The reviewed studies highlight the feasibility of applying FL across diverse data types without the need to centralize sensitive information. Nevertheless, key challenges such as system interoperability, data heterogeneity, and model interpretability continue to hinder real-world adoption. Future research should focus on developing scalable, standardized FL infrastructures, improving model robustness across heterogeneous sources, and addressing ethical concerns surrounding fairness and accountability to support safe and effective clinical integration.
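As a concrete illustration of the federated averaging idea that underlies many of the systems this review surveys, the following minimal sketch aggregates locally trained model weights without moving raw data. It is not drawn from any specific reviewed implementation; the names `federated_average`, `local_weights`, and `num_samples` are illustrative assumptions.

```python
# Minimal FedAvg sketch: aggregate client model weights without sharing raw data.
# Illustrative only; names (local_weights, num_samples) are assumptions, not from the review.
from typing import Dict, List
import numpy as np


def federated_average(local_weights: List[Dict[str, np.ndarray]],
                      num_samples: List[int]) -> Dict[str, np.ndarray]:
    """Sample-size-weighted average of per-client parameter dictionaries (FedAvg)."""
    total = float(sum(num_samples))
    global_weights: Dict[str, np.ndarray] = {}
    for name in local_weights[0]:
        global_weights[name] = sum(
            (n / total) * w[name] for w, n in zip(local_weights, num_samples)
        )
    return global_weights


# Example: three sites train locally; only parameters (not patient data) leave each site.
clients = [{"w": np.array([1.0, 2.0])},
           {"w": np.array([2.0, 0.0])},
           {"w": np.array([0.0, 1.0])}]
print(federated_average(clients, num_samples=[100, 50, 50]))  # weighted average: [1.0, 1.25]
```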
Citations: 0
Fine-grained multiclass nuclei segmentation with molecular empowered all-in-SAM model.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-09-01 Epub Date: 2025-09-04 DOI: 10.1117/1.JMI.12.5.057501
Xueyuan Li, Can Cui, Ruining Deng, Yucheng Tang, Quan Liu, Tianyuan Yao, Shunxing Bao, Naweed Chowdhury, Haichun Yang, Yuankai Huo

Purpose: Recent developments in computational pathology have been driven by advances in vision foundation models (VFMs), particularly the Segment Anything Model (SAM). This model facilitates nuclei segmentation through two primary methods: prompt-based zero-shot segmentation and the use of cell-specific SAM models for direct segmentation. These approaches enable effective segmentation across a range of nuclei and cells. However, general VFMs often face challenges with fine-grained semantic segmentation, such as identifying specific nuclei subtypes or particular cells.

Approach: In this paper, we propose the molecular empowered all-in-SAM model to advance computational pathology by leveraging the capabilities of VFMs. This model incorporates a full-stack approach, focusing on (1) annotation: engaging lay annotators through molecular-empowered learning to reduce the need for detailed pixel-level annotations; (2) learning: adapting the SAM model to emphasize specific semantics, leveraging its strong generalizability through a SAM adapter; and (3) refinement: enhancing segmentation accuracy by integrating molecular-oriented corrective learning.
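A hedged sketch of the adapter idea referenced in step (2): a small trainable bottleneck module is attached to a frozen pretrained encoder so that only a few parameters are tuned toward the target semantics. The module and backbone below are generic stand-ins, not the authors' all-in-SAM code or the actual SAM architecture.

```python
# Minimal bottleneck-adapter sketch in PyTorch (illustrative; not the paper's all-in-SAM code).
# A small trainable module is added while the large pretrained backbone stays frozen.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Residual bottleneck adapter: down-project, non-linearity, up-project."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))  # residual keeps pretrained features intact


# Usage idea: freeze the foundation-model encoder, train only the adapter parameters.
encoder = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)  # stand-in backbone
for p in encoder.parameters():
    p.requires_grad = False
adapter = Adapter(dim=256)
tokens = torch.randn(2, 196, 256)       # (batch, patches, channels)
adapted = adapter(encoder(tokens))      # during training, only adapter parameters would update
```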

Results: Experimental results from both in-house and public datasets show that the all-in-SAM model significantly improves cell classification performance, even when faced with varying annotation quality.

Conclusions: Our approach not only reduces the workload for annotators but also extends the accessibility of precise biomedical image analysis to resource-limited settings, thereby advancing medical diagnostics and automating pathology image analysis.

Citations: 0
Comprehensive mixed reality surgical navigation system for liver surgery.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-09-01 Epub Date: 2025-10-06 DOI: 10.1117/1.JMI.12.5.055001
Bowen Xiang, Jon S Heiselman, Michael I Miga

Purpose: Intraoperative liver deformation and the need to glance repeatedly between the operative field and a remote monitor undermine the precision and workflow of image-guided liver surgery. Existing mixed reality (MR) prototypes address only isolated aspects of this challenge and lack quantitative validation in deformable anatomy.

Approach: We introduce a fully self-contained MR navigation system for liver surgery that runs on an MR headset and bridges this clinical gap by (1) stabilizing holographic content with an external retro-reflective reference tool that defines a fixed world origin, (2) tracking instruments and surface points in real time with the headset's depth camera, and (3) compensating soft-tissue deformation through a weighted ICP + linearized iterative boundary reconstruction pipeline. A lightweight server-client architecture streams deformation-corrected 3D models to the headset and enables hands-free control via voice commands.
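The rigid core of an ICP-style alignment such as the one used here for instrument and surface tracking can be written compactly as a Kabsch/SVD solve over paired points. The sketch below is illustrative only and omits the paper's per-point weighting and linearized boundary reconstruction.

```python
# Minimal rigid-alignment step (Kabsch/SVD), the core update inside an ICP loop.
# Illustrative sketch only; the paper's pipeline adds per-point weights and a
# linearized iterative boundary reconstruction for deformation, which are not shown here.
import numpy as np


def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Return rotation R and translation t minimizing ||R @ src_i + t - dst_i|| over paired Nx3 points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t


# In a full ICP loop, correspondences would be re-estimated (e.g., nearest neighbors)
# and the transform re-solved until the residual stops decreasing.
```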

Results: Validation on a multistate liver-phantom protocol demonstrated that the reference tool reduced mean hologram drift from 4.0 ± 1.2 mm to 1.1 ± 0.3 mm and improved tracking accuracy from 3.6 ± 1.3 mm to 2.3 ± 0.8 mm. Across five simulated deformation states, nonrigid registration lowered surface target registration error from 7.4 ± 4.8 mm to 3.0 ± 2.7 mm, an average 57% error reduction, yielding sub-4 mm guidance accuracy.

Conclusions: By unifying stable MR visualization, tool tracking, and biomechanical deformation correction in a single headset, the proposed platform eliminates monitor-related context switching and restores spatial fidelity lost to liver motion. The device-agnostic framework is extendable to open approaches and potentially laparoscopic workflows and other soft-tissue interventions, marking a significant step toward MR-enabled surgical navigation.

Citations: 0
BigReg: an efficient registration pipeline for high-resolution X-ray and light-sheet fluorescence microscopy.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-09-01 Epub Date: 2025-10-06 DOI: 10.1117/1.JMI.12.5.054004
Siyuan Mei, Fuxin Fan, Mareike Thies, Mingxuan Gu, Fabian Wagner, Oliver Aust, Ina Erceg, Zeynab Mirzaei, Georgiana Neag, Yipeng Sun, Yixing Huang, Andreas Maier

Purpose: We aim to propose a reliable registration pipeline tailored for multimodal mouse bone imaging using X-ray microscopy (XRM) and light-sheet fluorescence microscopy (LSFM). These imaging modalities have emerged as pivotal tools in preclinical research, particularly for studying bone remodeling diseases such as osteoporosis. Although multimodal registration enables micrometer-level structural correspondence and facilitates functional analysis, conventional landmark-, feature-, or intensity-based approaches are often infeasible due to inconsistent signal characteristics and significant misalignment resulting from independent scanning, especially in real-world and reference-free scenarios.

Approach: To address these challenges, we introduce BigReg, an automatic, two-stage registration pipeline optimized for high-resolution XRM and LSFM volumes. The first stage involves extracting surface features and applying two successive global-to-local point-cloud-based methods for coarse alignment. The subsequent stage refines this alignment in the 3D Fourier domain using a modified cross-correlation technique, achieving precise volumetric registration.
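The Fourier-domain refinement idea can be illustrated with standard phase correlation, which recovers a translational offset from the normalized cross-power spectrum. This is a simplified stand-in for the modified cross-correlation used in BigReg; the function below assumes integer, circular shifts.

```python
# Minimal 3D phase-correlation sketch for estimating a translational offset.
# Illustrative only: standard phase correlation, not the paper's modified variant.
import numpy as np


def phase_correlation_shift(vol_a: np.ndarray, vol_b: np.ndarray) -> tuple:
    """Estimate the integer voxel shift d such that vol_b is approximately roll(vol_a, d)."""
    Fa, Fb = np.fft.fftn(vol_a), np.fft.fftn(vol_b)
    cross_power = Fb * np.conj(Fa)
    cross_power /= np.maximum(np.abs(cross_power), 1e-12)    # keep phase information only
    corr = np.real(np.fft.ifftn(cross_power))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks beyond half the volume size back to negative shifts.
    return tuple(int(p - s) if p > s // 2 else int(p) for p, s in zip(peak, corr.shape))


rng = np.random.default_rng(0)
a = rng.random((32, 32, 32))
b = np.roll(a, shift=(3, -5, 2), axis=(0, 1, 2))
print(phase_correlation_shift(a, b))   # (3, -5, 2)
```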

Results: Evaluations using expert-annotated landmarks and augmented test data demonstrate that BigReg approaches the accuracy of landmark-based registration with a landmark distance (LMD) of 8.36 ± 0.12 μm and a landmark fitness (LM fitness) of 85.71% ± 1.02%. Moreover, BigReg can provide an optimal initialization for mutual information-based methods that otherwise fail independently, further reducing LMD to 7.24 ± 0.11 μm and increasing LM fitness to 93.90% ± 0.77%.

Conclusions: To the best of our knowledge, BigReg is the first automated method to successfully register XRM and LSFM volumes without requiring manual intervention or prior alignment cues. Its ability to accurately align fine-scale structures, such as lacunae in XRM and osteocytes in LSFM, opens up new avenues for quantitative, multimodal analysis of bone microarchitecture and disease pathology, particularly in studies of osteoporosis.

Citations: 0
Approximating the ideal observer for joint signal detection and estimation tasks by the use of Markov-Chain Monte Carlo with generative adversarial networks.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-09-01 Epub Date: 2025-10-21 DOI: 10.1117/1.JMI.12.5.051810
Dan Li, Kaiyan Li, Weimin Zhou, Mark A Anastasio

Purpose: The Bayesian ideal observer (IO) is a special model observer that achieves the best possible performance on tasks that involve signal detection or discrimination. Although IOs are desired for optimizing and assessing imaging technologies, they remain difficult to compute. Previously, a hybrid method that combines deep learning (DL) with a Markov-Chain Monte Carlo (MCMC) method was proposed for estimating the IO test statistic for joint signal detection-estimation tasks. That method will be referred to as the hybrid MCMC method. However, the hybrid MCMC method was restricted to use cases that involved relatively simple stochastic background and signal models.

Approach: The previously developed hybrid MCMC method is generalized by utilizing a framework that integrates deep generative modeling into the MCMC sampling process. This method employs a generative adversarial network (GAN) that is trained on object or signal ensembles to establish data-driven stochastic object and signal models, respectively, and will be referred to as the hybrid MCMC-GAN method. This circumvents the limitation of traditional MCMC methods and enables the estimation of the IO test statistic with consideration of broader classes of clinically relevant object and signal models.
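A minimal sketch of MCMC sampling in a GAN latent space, assuming a `generator` mapping latent vectors to objects and a `log_likelihood` of the measurement given an object (both assumed callables, not the authors' implementation): a random-walk Metropolis chain targets the posterior over latents, and the generated objects can feed an IO test-statistic estimate.

```python
# Minimal random-walk Metropolis sketch over a GAN latent space (illustrative only;
# `generator` and `log_likelihood` are assumed callables, not the paper's hybrid MCMC-GAN code).
import numpy as np

rng = np.random.default_rng(1)


def log_posterior(z, measurement, generator, log_likelihood):
    """Unnormalized log-posterior: data likelihood of G(z) plus a standard-normal latent prior."""
    return log_likelihood(generator(z), measurement) - 0.5 * np.dot(z, z)


def mcmc_latent(measurement, generator, log_likelihood, dim=64, n_steps=5000, step=0.05):
    z = rng.standard_normal(dim)
    lp = log_posterior(z, measurement, generator, log_likelihood)
    samples = []
    for _ in range(n_steps):
        z_prop = z + step * rng.standard_normal(dim)          # random-walk proposal
        lp_prop = log_posterior(z_prop, measurement, generator, log_likelihood)
        if np.log(rng.random()) < lp_prop - lp:               # Metropolis accept/reject
            z, lp = z_prop, lp_prop
        samples.append(generator(z))                          # object samples for the test statistic
    return samples
```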

Results: The hybrid MCMC-GAN method was evaluated on two binary detection-estimation tasks in which the observer must detect a signal and estimate its amplitude if the signal is detected. First, a stylized signal-known-statistically (SKS) and background-known-exactly task was considered. A GAN was employed to establish a stochastic signal model, enabling direct comparison of our GAN-based IO approximation with a closed-form expression for the IO decision strategy. The results confirmed that the proposed method could accurately approximate the performance of the true IO. Next, an SKS and background-known-statistically (BKS) task was considered. Here, a GAN was employed to establish a stochastic object model that described anatomical variability in an ensemble of magnetic resonance (MR) brain images. This represented a setting where traditional MCMC methods are inapplicable. In this study, although a reference estimate of the true IO performance was unavailable, the hybrid MCMC-GAN produced area under the estimation receiver operating characteristic curve (AEROC) estimates that exceeded those of a sub-ideal observer that represented a lower bound for the IO performance.

Conclusion: By combining GAN-based generative modeling with MCMC, the hybrid MCMC-GAN method extends a previously proposed IO approximation method to more general detection-estimation tasks. This provides a new capability to benchmark and optimize imaging-system performance through virtual imaging studies.

Citations: 0
Beyond the Victory Lap.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-09-01 Epub Date: 2025-10-24 DOI: 10.1117/1.JMI.12.5.050101
Bennett A Landman

The editorial introduces JMI Volume 12 Issue 5, as it marks the beginning of a new academic year with a celebration of innovation, mentorship, and the evolving role of scholarly publishing in medical imaging. It emphasizes that impactful research is not just about results, but about teaching, sharing insights, and advancing the field through curiosity, collaboration, and community-driven resources.

Citations: 0
TFKT V2: task-focused knowledge transfer from natural images for computed tomography perceptual image quality assessment.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-09-01 Epub Date: 2025-05-28 DOI: 10.1117/1.JMI.12.5.051805
Kazi Ramisa Rifa, Md Atik Ahamed, Jie Zhang, Abdullah Imran

Purpose: The accurate assessment of computed tomography (CT) image quality is crucial for ensuring diagnostic reliability while minimizing radiation dose. Radiologists' evaluations are time-consuming and labor-intensive. Existing automated approaches often require large CT datasets with predefined image quality assessment (IQA) scores, which often do not align well with clinical evaluations. We aim to develop a reference-free, automated method for CT IQA that closely reflects radiologists' evaluations, reducing the dependency on large annotated datasets.

Approach: We propose Task-Focused Knowledge Transfer (TFKT), a deep learning-based IQA method leveraging knowledge transfer from task-similar natural image datasets. TFKT incorporates a hybrid convolutional neural network-transformer model, enabling accurate quality predictions by learning from natural image distortions with human-annotated mean opinion scores. The model is pre-trained on natural image datasets and fine-tuned on low-dose computed tomography perceptual image quality assessment data to ensure task-specific adaptability.
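A hedged sketch of the two-stage transfer idea: a quality-regression model is first fit to natural-image opinion scores and then fine-tuned on CT with a smaller backbone learning rate. The toy modules and hyperparameters below are assumptions, not the TFKT V2 architecture.

```python
# Minimal transfer-learning sketch: pretrain a quality-regression model on natural-image IQA
# data, then fine-tune on CT slices with a smaller backbone learning rate.
# Illustrative only; modules and hyperparameters are assumptions, not the paper's TFKT code.
import torch
import torch.nn as nn

backbone = nn.Sequential(                      # stand-in for a CNN/transformer hybrid encoder
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(16, 1)                        # predicts a scalar quality score (e.g., MOS)
model = nn.Sequential(backbone, head)

# Stage 1 (pretraining) would fit `model` to human mean opinion scores on natural images.
# Stage 2 (fine-tuning on CT): reuse pretrained weights, smaller learning rate for the backbone.
optimizer = torch.optim.AdamW([
    {"params": backbone.parameters(), "lr": 1e-5},
    {"params": head.parameters(), "lr": 1e-4},
])
criterion = nn.MSELoss()

ct_batch = torch.randn(8, 1, 64, 64)           # toy CT slices
target_scores = torch.rand(8, 1)               # toy perceptual-quality targets
loss = criterion(model(ct_batch), target_scores)
loss.backward()
optimizer.step()
```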

Results: Extensive evaluations demonstrate that the proposed TFKT method effectively predicts IQA scores aligned with radiologists' assessments on in-domain datasets and generalizes well to out-of-domain clinical pediatric CT exams. The model achieves robust performance without requiring high-dose reference images. Our model is capable of assessing the quality of approximately 30 CT image slices per second.

Conclusions: The proposed TFKT approach provides a scalable, accurate, and reference-free solution for CT IQA. The model bridges the gap between traditional and deep learning-based IQA, offering clinically relevant and computationally efficient assessments applicable to real-world clinical settings.

Citations: 0
Full-head segmentation of MRI with abnormal brain anatomy: model and data release.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-09-01 Epub Date: 2025-09-17 DOI: 10.1117/1.JMI.12.5.054001
Andrew M Birnbaum, Adam Buchwald, Peter Turkeltaub, Adam Jacks, George Carr, Shreya Kannan, Yu Huang, Abhisheck Datta, Lucas C Parra, Lukas A Hirsch

Purpose: Our goal was to develop a deep network for whole-head segmentation, including clinical magnetic resonance imaging (MRI) with abnormal anatomy, and compile the first public benchmark dataset for this purpose. We collected 98 MRIs with volumetric segmentation labels for a diverse set of human subjects, including normal and abnormal anatomy in clinical cases of stroke and disorders of consciousness.

Approach: Training labels were generated by manually correcting initial automated segmentations for skin/scalp, skull, cerebrospinal fluid, gray matter, white matter, air cavity, and extracephalic air. We developed a "MultiAxial" network consisting of three 2D U-Nets that operate independently in sagittal, axial, and coronal planes, which are then combined to produce a single 3D segmentation.
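The per-plane fusion can be sketched as follows: each 2D network predicts class probabilities slice-by-slice along its own axis, the three probability volumes are averaged, and the argmax gives the final label map. The models below are assumed stand-ins, not the released MultiAxial weights or code.

```python
# Minimal sketch of fusing per-plane 2D predictions into one 3D segmentation by averaging
# class probabilities across orientations (illustrative only; the models are assumed 2D
# networks ordered to match the volume axes, not the released MultiAxial implementation).
import torch


def predict_along_axis(model, volume: torch.Tensor, axis: int) -> torch.Tensor:
    """Run a 2D model slice-by-slice along `axis` of a (D, H, W) volume; return (C, D, H, W) probs."""
    slices = volume.movedim(axis, 0)                         # bring the slicing axis to the front
    probs = torch.stack([
        torch.softmax(model(sl[None, None]), dim=1)[0]       # (C, h, w) probabilities per slice
        for sl in slices
    ], dim=1)                                                # (C, N_slices, h, w)
    return probs.movedim(1, axis + 1)                        # back to (C, D, H, W)


def multiaxial_segmentation(models, volume):
    """Average probabilities from the three per-plane models and take the argmax label map."""
    fused = sum(predict_along_axis(m, volume, ax) for ax, m in enumerate(models)) / len(models)
    return fused.argmax(dim=0)                               # (D, H, W) labels
```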

Results: The MultiAxial network achieved a test-set Dice score of 0.88 ± 0.04 (median ± interquartile range) on whole-head segmentation, including gray and white matter. This was compared with 0.86 ± 0.04 for Multipriors and 0.79 ± 0.10 for SPM12, two standard tools currently available for this task. The MultiAxial network gains in robustness by avoiding the need for coregistration with an atlas. It performed well in regions with abnormal anatomy and on images that have been de-identified. It enables more accurate and robust current flow modeling when incorporated into ROAST, a widely used modeling toolbox for transcranial electric stimulation.

Conclusions: We are releasing a new state-of-the-art tool for whole-head MRI segmentation in abnormal anatomy, along with the largest volume of labeled clinical head MRIs, including labels for nonbrain structures. Together, the model and data may serve as a benchmark for future efforts.

Citations: 0
Joint CT reconstruction of anatomy and implants using a mixed prior model.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-09-01 Epub Date: 2025-10-18 DOI: 10.1117/1.JMI.12.5.053502
Xiao Jiang, Grace J Gang, J Webster Stayman

Purpose: Medical implants, often made of dense materials, pose significant challenges to accurate computed tomography (CT) reconstruction, especially near implants due to beam hardening and partial-volume artifacts. Moreover, diagnostics involving implants often require separate visualization for implants and anatomy. In this work, we propose an approach for joint estimation of anatomy and implants as separate volumes using a mixed prior model.

Approach: We leverage a learning-based prior for anatomy and a sparsity prior for implants to decouple the two volumes. In addition, a hybrid mono-polyenergetic forward model is employed to accommodate the spectral effects of implants, and a multiresolution object model is used to achieve high-resolution implant reconstruction. The reconstruction process alternates between diffusion posterior sampling for anatomy updates and classic optimization for implants and spectral coefficients.
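A minimal alternating sketch under simplified assumptions (a single-energy linear forward model, a plug-in denoiser standing in for the learned anatomy prior, and soft-thresholding for implant sparsity) conveys the two-volume decoupling. It does not reproduce the paper's diffusion posterior sampling or polyenergetic model; `forward_op`, `adjoint_op`, and `anatomy_denoiser` are assumed callables.

```python
# Minimal alternating-update sketch for a two-volume (anatomy + implant) reconstruction with a
# denoiser prior on anatomy and a sparsity (soft-threshold) prior on the implant.
# Illustrative only; `forward_op`, `adjoint_op`, and `anatomy_denoiser` are assumed callables.
import numpy as np


def soft_threshold(x: np.ndarray, tau: float) -> np.ndarray:
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)


def joint_reconstruct(y, forward_op, adjoint_op, anatomy_denoiser,
                      n_iters=50, step=1e-3, tau=1e-2):
    anatomy = np.zeros_like(adjoint_op(y))
    implant = np.zeros_like(anatomy)
    for _ in range(n_iters):
        residual = forward_op(anatomy + implant) - y
        anatomy = anatomy_denoiser(anatomy - step * adjoint_op(residual))   # prior on anatomy
        residual = forward_op(anatomy + implant) - y                        # re-evaluate residual
        implant = soft_threshold(implant - step * adjoint_op(residual), tau)  # sparse implant
    return anatomy, implant
```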

Results: Evaluations were performed on emulated cardiac imaging with stent and spine imaging with pedicle screws. The structures of the cardiac stent with 0.25 mm wires were clearly visualized in the implant images, whereas the blooming artifacts around the stent were effectively suppressed in the anatomical reconstruction. For pedicle screws, the proposed algorithm mitigated streaking and beam-hardening artifacts in the anatomy volume, demonstrating significant improvements in SSIM and PSNR compared with frequency-splitting metal artifact reduction and model-based reconstruction on slices containing implants.

Conclusion: The proposed mixed prior model coupled with a hybrid spectral and multiresolution model can help to separate spatially and spectrally distinct objects that differ from anatomical features in single-energy CT, improving both image quality and separate visualization of implants and anatomy.

Citations: 0
Segmentation variability and radiomics stability for predicting triple-negative breast cancer subtype using magnetic resonance imaging.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-09-01 Epub Date: 2025-09-17 DOI: 10.1117/1.JMI.12.5.054501
Isabella Cama, Alejandro Guzmán, Cristina Campi, Michele Piana, Karim Lekadir, Sara Garbarino, Oliver Díaz

Purpose: Many studies caution against using radiomic features that are sensitive to contouring variability in predictive models for disease stratification. Consequently, metrics such as the intraclass correlation coefficient (ICC) are recommended to guide feature selection based on stability. However, the direct impact of segmentation variability on the performance of predictive models remains underexplored. We examine how segmentation variability affects both feature stability and predictive performance in the radiomics-based classification of triple-negative breast cancer (TNBC) using breast magnetic resonance imaging.

Approach: We analyzed 244 images from the Duke dataset, introducing segmentation variability through controlled modifications of manual segmentations. For each segmentation mask, explainable radiomic features were selected using Shapley Additive exPlanations and used to train logistic regression models. Feature stability across segmentations was assessed via ICC, Pearson's correlation, and reliability scores quantifying the relationship between segmentation variability and feature robustness.
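Feature stability across segmentation variants is commonly scored with an intraclass correlation coefficient; a minimal ICC(2,1) computation over a subjects-by-segmentations matrix is sketched below. The exact ICC variant and implementation used in the study may differ.

```python
# Minimal two-way random-effects ICC(2,1) sketch for feature stability across segmentations.
# Illustrative only; the specific ICC variant and implementation used in the paper may differ.
import numpy as np


def icc_2_1(x: np.ndarray) -> float:
    """x has shape (n_subjects, k_segmentations); returns ICC(2,1), absolute agreement."""
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)     # between-subject mean square
    ms_cols = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)     # between-segmentation mean square
    ss_err = np.sum((x - grand) ** 2) - (n - 1) * ms_rows - (k - 1) * ms_cols
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)


rng = np.random.default_rng(0)
feature = rng.normal(size=(50, 1)) + 0.1 * rng.normal(size=(50, 4))   # 50 lesions, 4 masks each
print(round(icc_2_1(feature), 3))                                      # high ICC -> stable feature
```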

Results: Model performances in predicting TNBC do not exhibit a significant difference across varying segmentations. The most explicative and predictive features exhibit decreasing ICC as segmentation accuracy decreases. However, their predictive power remains intact due to low ICC combined with high Pearson's correlation. No shared numerical relationship is found between feature stability and segmentation variability among the most predictive features.

Conclusions: Moderate segmentation variability has a limited impact on model performance. Although incorporating peritumoral information may reduce feature reproducibility, it does not compromise predictive utility. Notably, feature stability is not a strict prerequisite for predictive relevance, highlighting that exclusive reliance on ICC or stability metrics for feature selection may inadvertently discard informative features.

Citations: 0