
Latest publications in the Journal of Medical Imaging

Simulating dynamic tumor contrast enhancement in breast MRI using conditional generative adversarial networks.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-11-01 Epub Date: 2025-06-28 DOI: 10.1117/1.JMI.12.S2.S22014
Richard Osuala, Smriti Joshi, Apostolia Tsirikoglou, Lidia Garrucho, Walter H L Pinaya, Daniel M Lang, Julia A Schnabel, Oliver Diaz, Karim Lekadir

Purpose: Deep generative models and synthetic data generation have become essential for advancing computer-assisted diagnosis and treatment. We explore one emerging and particularly promising application of deep generative models: the generation of virtual contrast enhancement. This allows prediction and simulation of contrast enhancement in breast magnetic resonance imaging (MRI) without physical contrast agent injection, thereby unlocking lesion localization and categorization even in patient populations for whom the lengthy, costly, and invasive injection of a physical contrast agent is contraindicated.

Approach: We define a framework for desirable properties of synthetic data, which leads us to propose the scaled aggregate measure (SAMe), consisting of a balanced set of scaled complementary metrics for generative model training and convergence evaluation. We further adopt a conditional generative adversarial network to translate from non-contrast-enhanced T1-weighted fat-saturated breast MRI slices to their dynamic contrast-enhanced (DCE) counterparts, thus learning to detect, localize, and adequately highlight breast cancer lesions. Next, we extend our model approach to jointly generate multiple DCE-MRI time points, enabling the simulation of contrast enhancement across temporal DCE-MRI acquisitions. In addition, three-dimensional U-Net tumor segmentation models are implemented and trained on combinations of synthetic and real DCE-MRI data to investigate the effect of data augmentation with synthetic DCE-MRI volumes.
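The SAMe aggregates several scaled, complementary quality metrics into a single convergence score. A minimal sketch of that idea follows; the min-max scaling over the observed history, the equal weighting, and the example metric names are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def scaled_aggregate_measure(metrics, higher_is_better):
    """Combine complementary metrics into one scalar in [0, 1].

    metrics: dict of name -> list of values observed across training
             checkpoints; the last entry is the current checkpoint.
    higher_is_better: dict of name -> bool.
    Each metric is min-max scaled over its observed range so that no
    single metric dominates; the scaled values are then averaged.
    """
    scaled = []
    for name, history in metrics.items():
        lo, hi = min(history), max(history)
        span = hi - lo if hi > lo else 1.0
        s = (history[-1] - lo) / span
        if not higher_is_better[name]:
            s = 1.0 - s  # flip error-type metrics (e.g., MAE, FID)
        scaled.append(s)
    return float(np.mean(scaled))

# Toy usage: SSIM should rise, MAE and FID should fall over training.
history = {
    "ssim": [0.60, 0.75, 0.83],
    "mae":  [0.09, 0.05, 0.03],
    "fid":  [80.0, 45.0, 30.0],
}
direction = {"ssim": True, "mae": False, "fid": False}
score = scaled_aggregate_measure(history, direction)
```

In this toy run the last checkpoint is the best on every metric, so the aggregate reaches its maximum of 1.0.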

Results: We conducted four main sets of experiments: (i) the variation across single metrics demonstrated the value of SAMe; (ii) the quality and potential of virtual contrast injection for tumor detection and localization were shown; (iii) segmentation models augmented with synthetic DCE-MRI data were more robust in the presence of domain shifts between pre-contrast and DCE-MRI domains; and (iv) the joint synthesis approach for multi-sequence DCE-MRI produced temporally coherent synthetic DCE-MRI sequences and indicated the generative model's capability to learn complex contrast enhancement patterns.

Conclusions: Virtual contrast injection can produce accurate synthetic DCE-MRI images, potentially enhancing breast cancer diagnosis and treatment protocols. We demonstrate that detecting, localizing, and segmenting tumors using synthetic DCE-MRI is feasible and promising, particularly for patients in whom contrast agent injection is risky or contraindicated. Jointly generating multiple subsequent DCE-MRI sequences can increase image quality and unlock clinical applications that assess tumor characteristics related to the response to contrast media injection, a pillar of personalized treatment planning.

Journal of Medical Imaging 12(S2), S22014 (2025). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12205897/pdf/. Citations: 0.
Introduction to the JMI Special Issue on Advances in Breast Imaging.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-11-01 Epub Date: 2025-09-10 DOI: 10.1117/1.JMI.12.S2.S22001
Maryellen L Giger, Susan Astley Theodossiadis, Karen Drukker, Hui Li, Andrew D A Maidment, Heather M Whitney

The editorial introduces the JMI Special Issue on Advances in Breast Imaging, reflecting on the current forefront of breast imaging research.

Journal of Medical Imaging 12(S2), S22001 (2025). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12422285/pdf/. Citations: 0.
Introduction to the JMI Special Section on Computational Pathology.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-11-01 Epub Date: 2025-12-16 DOI: 10.1117/1.JMI.12.6.061401
Baowei Fei, Metin Nafi Gurcan, Yuankai Huo, Pinaki Sarder, Aaron Ward
Journal of Medical Imaging 12(6), 061401 (2025). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12705466/pdf/. Citations: 0.
HID-CON: weakly supervised intrahepatic cholangiocarcinoma subtype classification of whole slide images using contrastive hidden class detection.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-11-01 Epub Date: 2025-03-12 DOI: 10.1117/1.JMI.12.6.061402
Jing Wei Tan, Kyoungbun Lee, Won-Ki Jeong

Purpose: Biliary tract cancer, also known as intrahepatic cholangiocarcinoma (IHCC), is a rare disease that shows no clear symptoms during its early stage, but its prognosis depends heavily on the cancer subtype. Hence, an accurate cancer subtype classification model is necessary to provide better treatment plans to patients and to reduce mortality. However, annotating histopathology images at the pixel or patch level is time-consuming and labor-intensive for giga-pixel whole slide images. To address this problem, we propose a weakly supervised method for classifying IHCC subtypes using only image-level labels.

Approach: The core idea of the proposed method is to detect regions (i.e., subimages or patches) commonly included in all subtypes, which we name the "hidden class," and to remove them via iterative application of contrastive loss and label smoothing. Doing so will enable us to obtain only patches that faithfully represent each subtype, which are then used to train the image-level classification model by multiple instance learning (MIL).
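Once hidden-class patches are filtered out, only subtype-representative patches feed the bag-level prediction. The following sketch illustrates that filtering-then-aggregation step only; the softmax layout, the threshold, and mean-pooling MIL are simplifying assumptions, not the paper's exact training procedure:

```python
import numpy as np

def bag_prediction(patch_probs, hidden_idx, keep_threshold=0.5):
    """Illustrative bag-level prediction with a 'hidden' class.

    patch_probs: (n_patches, n_subtypes + 1) softmax outputs, where
    column `hidden_idx` scores patches common to all subtypes.
    Patches dominated by the hidden class are dropped; the remaining
    subtype probabilities are averaged into a bag label (mean-pooling MIL).
    """
    keep = patch_probs[:, hidden_idx] < keep_threshold
    kept = patch_probs[keep]
    if kept.size == 0:  # degenerate bag: keep everything rather than fail
        kept = patch_probs
    subtype_cols = [c for c in range(patch_probs.shape[1]) if c != hidden_idx]
    bag_probs = kept[:, subtype_cols].mean(axis=0)
    return int(np.argmax(bag_probs)), bag_probs

# Toy bag: two subtypes plus a hidden class in the last column.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.1, 0.8],   # hidden-class patch, filtered out
                  [0.6, 0.3, 0.1]])
label, bag_probs = bag_prediction(probs, hidden_idx=2)
```

The middle patch is suppressed as hidden, so the bag probabilities come from the two subtype-specific patches only.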

Results: Our method outperforms the state-of-the-art weakly supervised learning methods ABMIL, TransMIL, and DTFD-MIL by ∼17%, 18%, and 8%, respectively, and achieves performance comparable to that of supervised methods.

Conclusions: The introduction of a hidden class to represent patches commonly found across all subtypes enhances the accuracy of IHCC classification and addresses the weak labeling problem in histopathology images.

Journal of Medical Imaging 12(6), 061402 (2025). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11898109/pdf/. Citations: 0.
Asymmetric scatter kernel estimation neural network for digital breast tomosynthesis.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-11-01 Epub Date: 2025-06-12 DOI: 10.1117/1.JMI.12.S2.S22008
Subong Hyun, Seoyoung Lee, Ilwong Choi, Choul Woo Shin, Seungryong Cho

Purpose: Various deep learning (DL) approaches have been developed for estimating scatter radiation in digital breast tomosynthesis (DBT). Existing DL methods generally employ an end-to-end training approach, overlooking the underlying physics of scatter formation. We propose a deep learning approach inspired by asymmetric scatter kernel superposition to estimate scatter in DBT.

Approach: We use the network to generate the scatter amplitude distribution as well as the scatter kernel width and asymmetric factor map. To account for variations in local breast thickness and shape in DBT projection data, we integrated the Euclidean distance map and projection angle information into the network design for estimating the asymmetric factor.
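Kernel-superposition scatter models estimate scatter as a sum of per-pixel kernels weighted by amplitude; here, the kernel width and an asymmetry factor also vary per pixel. A minimal brute-force sketch of that superposition (the asymmetric-Gaussian form, the map names, and the small-image loop are illustrative assumptions; the paper predicts these maps with a network):

```python
import numpy as np

def asymmetric_gaussian_kernel(size, sigma, asym):
    """2D Gaussian whose width along the first axis differs by factor
    `asym` on one side of the center, mimicking an asymmetric kernel."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1].astype(float)
    sig_y = np.where(y < 0, sigma * asym, sigma)  # per-side width
    k = np.exp(-(x**2 / (2.0 * sigma**2) + y**2 / (2.0 * sig_y**2)))
    return k / k.sum()

def scatter_estimate(primary, amplitude, sigma_map, asym_map, size=9):
    """Superpose per-pixel asymmetric kernels (brute force; small images).

    primary: (H, W) primary signal; amplitude, sigma_map, asym_map are
    per-pixel maps of the kind the network is trained to predict."""
    H, W = primary.shape
    r = size // 2
    out = np.zeros((H + 2 * r, W + 2 * r))
    for i in range(H):
        for j in range(W):
            k = asymmetric_gaussian_kernel(size, sigma_map[i, j], asym_map[i, j])
            out[i:i + size, j:j + size] += amplitude[i, j] * primary[i, j] * k
    return out[r:-r, r:-r]
```

With `asym = 1` the kernel reduces to a symmetric Gaussian, which is the baseline the paper compares against; values below 1 narrow one side of the kernel, e.g. toward a thinner breast edge.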

Results: Systematic experiments on numerical phantom data and physical experimental data demonstrated that the proposed approach outperforms UNet-based end-to-end scatter estimation and symmetric kernel-based approaches in terms of the signal-to-noise ratio and structural similarity index measure of the resulting scatter-corrected images.

Conclusions: The proposed method represents a significant advance in scatter estimation for DBT projections, enabling robust and reliable physics-informed scatter correction.

Journal of Medical Imaging 12(S2), S22008 (2025). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12162176/pdf/. Citations: 0.
Artificial intelligence in medical imaging diagnosis: are we ready for its clinical implementation?
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-11-01 Epub Date: 2025-06-19 DOI: 10.1117/1.JMI.12.6.061405
Oscar Ramos-Soto, Itzel Aranguren, Manuel Carrillo M, Diego Oliva, Sandra E Balderas-Mata

Purpose: We examine the transformative potential of artificial intelligence (AI) in medical imaging diagnosis, focusing on improving diagnostic accuracy and efficiency through advanced algorithms, and we address the significant challenges preventing immediate clinical adoption of AI from technical, ethical, and legal perspectives. The aim is to highlight the current state of AI in medical imaging and outline the necessary steps to ensure safe, effective, and ethically sound clinical implementation.

Approach: We conduct a comprehensive discussion, with special emphasis on the technical requirements for robust AI models, the ethical frameworks needed for responsible deployment, and the legal implications, including data privacy and regulatory compliance. Explainable artificial intelligence (XAI) is examined as a means to increase transparency and build trust among healthcare professionals and patients.

Results: The analysis reveals key challenges to AI integration in clinical settings, including the need for extensive high-quality datasets, model reliability, advanced infrastructure, and compliance with regulatory standards. The lack of explainability in AI outputs remains a barrier, with XAI identified as crucial for meeting transparency standards and enhancing trust among end users.

Conclusions: Overcoming these barriers requires a collaborative, multidisciplinary approach to integrate AI into clinical practice responsibly. Addressing technical, ethical, and legal issues will support a smoother transition, fostering a more accurate, efficient, and patient-centered healthcare system in which AI augments traditional medical practices.

Journal of Medical Imaging 12(6), 061405 (2025). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12177575/pdf/. Citations: 0.
Self-supervision enhances instance-based multiple instance learning methods in digital pathology: a benchmark study.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-11-01 Epub Date: 2025-06-03 DOI: 10.1117/1.JMI.12.6.061404
Ali Mammadov, Loïc Le Folgoc, Julien Adam, Anne Buronfosse, Gilles Hayem, Guillaume Hocquet, Pietro Gori

Purpose: Multiple instance learning (MIL) has emerged as the best solution for whole slide image (WSI) classification. It consists of dividing each slide into patches, which are treated as a bag of instances labeled with a global label. MIL includes two main approaches: instance-based and embedding-based. In the former, each patch is classified independently, and then the patch scores are aggregated to predict the bag label. In the latter, bag classification is performed after aggregating patch embeddings. Although instance-based methods are naturally more interpretable, embedding-based MILs have usually been preferred in the past due to their robustness to poor feature extractors. Recently, the quality of feature embeddings has drastically increased using self-supervised learning (SSL). Nevertheless, many authors continue to endorse the superiority of embedding-based MIL.
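The two MIL families differ only in where the aggregation happens: before or after the classifier. A minimal sketch of the contrast (the shared linear classifier and mean-pooling aggregation are illustrative simplifications; the paper benchmarks far richer variants):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 2))  # shared linear classifier, 2 classes

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def instance_based(bag):
    """Classify each patch embedding, then aggregate the patch scores.
    The per-patch scores make the prediction directly interpretable."""
    patch_probs = softmax(bag @ W)   # (n_patches, 2)
    return patch_probs.mean(axis=0)  # bag-level probability

def embedding_based(bag):
    """Aggregate patch embeddings first, then classify the bag embedding."""
    bag_embedding = bag.mean(axis=0)  # (128,)
    return softmax(bag_embedding @ W)

bag = rng.normal(size=(50, 128))  # one WSI = bag of 50 patch embeddings
p_inst = instance_based(bag)
p_emb = embedding_based(bag)
```

With a strong SSL feature extractor producing the patch embeddings, the paper's finding is that the simpler, interpretable instance-based path matches or beats the embedding-based one.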

Approach: We conduct 710 experiments across 4 datasets, comparing 10 MIL strategies, 6 self-supervised methods with 4 backbones, 4 foundation models, and various pathology-adapted techniques. Furthermore, we introduce 4 instance-based MIL methods, never used before in the pathology domain.

Results: We show that with a good SSL feature extractor, simple instance-based MILs, with very few parameters, obtain similar or better performance than complex, state-of-the-art (SOTA) embedding-based MIL methods, setting new SOTA results on the BRACS and Camelyon16 datasets.

Conclusion: As simple instance-based MIL methods are naturally more interpretable and explainable to clinicians, our results suggest that more effort should be put into well-adapted SSL methods for WSI rather than into complex embedding-based MIL methods.

Journal of Medical Imaging 12(6), 061404 (2025). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12134610/pdf/. Citations: 0.
Cross-modality 3D MRI synthesis via cycle-guided denoising diffusion probability model.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-11-01 Epub Date: 2025-11-24 DOI: 10.1117/1.JMI.12.6.064003
Mingzhe Hu, Shaoyan Pan, Chih-Wei Chang, Richard L J Qiu, Junbo Peng, Tonghe Wang, Justin Roper, Hui Mao, David Yu, Xiaofeng Yang

Purpose: We propose a deep learning framework, the cycle-guided denoising diffusion probability model (CG-DDPM), for cross-modality magnetic resonance imaging (MRI) synthesis. The CG-DDPM aims to generate high-quality MRIs of a target modality from an existing modality, addressing the challenge of missing MRI sequences in clinical practice.

Approach: The CG-DDPM employs two interconnected conditional diffusion probabilistic models, with a cycle-guided reverse latent noise regularization to enhance synthesis consistency and anatomical fidelity. The framework was evaluated using the BraTS2020 dataset, which includes three-dimensional brain MRIs with T1-weighted, T2-weighted, and FLAIR modalities. The synthetic images were quantitatively assessed using metrics such as the multi-scale structural similarity measure (MSSIM), peak signal-to-noise ratio (PSNR), and mean absolute error (MAE). The CG-DDPM was benchmarked against state-of-the-art methods, including IDDPM, IDDIM, and MRI-cGAN.
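Of the evaluation metrics above, PSNR and MAE reduce to a few lines of NumPy (MSSIM typically comes from a dedicated library and is omitted here). A minimal sketch, with a synthetic reference slice standing in for real data:

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB (assumes ref != test, so MSE > 0)."""
    mse = np.mean((ref - test) ** 2)
    return float(10.0 * np.log10(data_range**2 / mse))

def mae(ref, test):
    """Mean absolute error."""
    return float(np.mean(np.abs(ref - test)))

# Toy check: a synthetic slice corrupted by small Gaussian noise.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
noisy = np.clip(ref + rng.normal(scale=0.01, size=ref.shape), 0.0, 1.0)
p = psnr(ref, noisy)
e = mae(ref, noisy)
```

With noise of standard deviation 0.01 on a unit data range, the PSNR lands near 40 dB, which is the regime where the small differences between methods reported in the Results become meaningful.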

Results: The CG-DDPM demonstrated superior performance across all cross-modality synthesis tasks (T1 → T2, T2 → T1, T1 → FLAIR, and FLAIR → T1). It consistently achieved the highest MSSIM values (ranging from 0.966 to 0.971), the lowest MAE (0.011 to 0.013), and competitive PSNR values (27.7 to 28.8 dB). Across all tasks, CG-DDPM outperformed IDDPM, IDDIM, and MRI-cGAN in most metrics and exhibited significantly lower uncertainty and inconsistency in MC-based sampling. Statistical analyses confirmed the robustness of CG-DDPM, with p-values < 0.05 in key comparisons.

Conclusions: The proposed CG-DDPM provides a robust and efficient solution for cross-modality MRI synthesis, offering improved accuracy, stability, and clinical applicability compared with existing methods. This approach has the potential to streamline MRI-based workflows, enhance diagnostic imaging, and support precision treatment planning in medical physics and radiation oncology.
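The abstract above evaluates synthetic volumes with MSSIM, PSNR, and MAE. As a reference point, here is a minimal numpy sketch of the two simpler metrics, PSNR and MAE (hypothetical helper functions, not the authors' evaluation code; in practice a library implementation such as scikit-image's would typically be used, and MSSIM requires a windowed multi-scale computation omitted here).

```python
import numpy as np

def mae(reference, synthetic):
    """Mean absolute error between a reference and a synthetic volume."""
    return np.mean(np.abs(reference - synthetic))

def psnr(reference, synthetic, data_range=1.0):
    """Peak signal-to-noise ratio in dB for intensities in [0, data_range]."""
    mse = np.mean((reference - synthetic) ** 2)
    if mse == 0:
        return np.inf  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

# toy check: a synthetic slice that deviates slightly from the reference
rng = np.random.default_rng(42)
ref = rng.random((64, 64))
syn = np.clip(ref + rng.normal(scale=0.01, size=ref.shape), 0.0, 1.0)
```

Lower MAE and higher PSNR both indicate closer agreement with the reference, which is why the paper reports the lowest MAE alongside competitive PSNR.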

Data-driven abdominal phenotypes of type 2 diabetes in lean, overweight, and obese cohorts from computed tomography.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-11-01 Epub Date: 2025-12-18 DOI: 10.1117/1.JMI.12.6.064006
Lucas W Remedios, Chloe Cho, Trent M Schwartz, Dingjie Su, Gaurav Rudravaram, Chenyu Gao, Aravind R Krishnan, Adam M Saunders, Michael E Kim, Shunxing Bao, Alvin C Powers, Bennett A Landman, John Virostko
Purpose: Although elevated body mass index (BMI) is a well-known risk factor for type 2 diabetes, the disease's presence in some lean adults and absence in others with obesity suggests that more detailed measurements of body composition may uncover abdominal phenotypes of type 2 diabetes. With artificial intelligence (AI) and computed tomography (CT), we can now leverage robust image segmentation to extract detailed measurements of size, shape, and tissue composition from abdominal organs, abdominal muscle, and abdominal fat depots in 3D clinical imaging at scale. This creates an opportunity to empirically define body composition signatures linked to type 2 diabetes risk and protection using large-scale clinical data.

Approach: We studied imaging records of 1728 de-identified patients from Vanderbilt University Medical Center with BMI collected from the electronic health record. To uncover BMI-specific diabetic abdominal patterns from clinical CT, we applied our design four times: once on the full cohort (n = 1728) and once on the lean (n = 497), overweight (n = 611), and obese (n = 620) subgroups separately. Briefly, our experimental design transforms abdominal scans into collections of explainable measurements, identifies which measurements most strongly predict type 2 diabetes and how they contribute to risk or protection, groups scans by shared model decision patterns, and links those decision patterns back to interpretable abdominal phenotypes in the original explainable measurement space of the abdomen using the following steps. (1) To capture abdominal composition: we represented each scan as a collection of 88 automatically extracted measurements of the size, shape, and fat content of abdominal structures using TotalSegmentator. (2) To learn key predictors: we trained a 10-fold cross-validated random forest classifier with SHapley Additive exPlanations (SHAP) analysis to rank features and estimate their risk-versus-protective effects for type 2 diabetes. (3) To validate individual effects: for the 20 highest-ranked features, we ran univariate logistic regressions to quantify their independent associations with type 2 diabetes. (4) To identify decision-making patterns: we embedded the top-20 SHAP profiles with uniform manifold approximation and projection and applied silhouette-guided K-means to cluster the random forest's decision space. (5) To link decisions to abdominal phenotypes: we fit one-versus-rest classifiers on the original anatomical measurements from each decision cluster and applied a second SHAP analysis to explore whether the random forest's logic had identified abdominal phenotypes.

Results: Across the full, lean, overweight, and obese cohorts, the random forest classifiers achieved mean areas under the receiver operating characteristic curve (AUC) of 0.72 to 0.74. SHAP highlighted type 2 diabetes features shared across the groups: fatty skeletal muscle, older age, greater visceral and subcutaneous fat, and a smaller or fattier pancreas. Univariate logistic regressions confirmed the direction of 14 to 18 of the top 20 predictors in each subgroup (p < 0.05). Clustering the model's decision space further revealed type 2 diabetes-enriched abdominal phenotypes in the lean, overweight, and obese subgroups.

Conclusions: We found similar abdominal signatures of type 2 diabetes across the lean, overweight, and obese groups, suggesting that the abdominal drivers of type 2 diabetes may be consistent across weight classes. Although our models achieved a modest AUC, the explainable components allow a clear interpretation of feature importance. Notably, in both the lean and obese subgroups, the most important feature for identifying type 2 diabetes was fatty skeletal muscle.
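Step (3) of the design above validates each top-ranked feature with a univariate logistic regression, whose coefficient sign indicates a risk or protective association. Here is a numpy-only sketch of that step on synthetic data (an illustrative gradient-descent fit, not the authors' implementation, which would typically use a statistics package).

```python
import numpy as np

def univariate_logistic_fit(x, y, lr=0.1, n_iter=2000):
    """Fit P(y=1 | x) = sigmoid(b0 + b1*x) by gradient descent.

    The sign of b1 indicates whether the feature is a risk factor
    (positive) or a protective factor (negative) for the outcome."""
    x = (x - x.mean()) / x.std()      # standardize the single feature
    b0, b1 = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
        grad0 = np.mean(p - y)        # gradient of the mean negative log-likelihood
        grad1 = np.mean((p - y) * x)
        b0 -= lr * grad0
        b1 -= lr * grad1
    return b0, b1

# toy data: a higher (hypothetical) 'visceral fat' measurement raises disease odds
rng = np.random.default_rng(7)
x = rng.normal(size=500)
p_true = 1.0 / (1.0 + np.exp(-(0.2 + 1.5 * x)))
y = (rng.random(500) < p_true).astype(float)
b0, b1 = univariate_logistic_fit(x, y)   # b1 should come out positive
```

Running one such fit per top-ranked measurement, and checking whether each fitted sign matches the SHAP-estimated direction, mirrors the confirmation reported in the results.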
Interpretable convolutional neural network for autism diagnosis support in children using structural magnetic resonance imaging datasets.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-11-01 Epub Date: 2025-11-08 DOI: 10.1117/1.JMI.12.6.064501
Garazi Casillas Martinez, Anthony Winder, Emma A M Stanley, Raissa Souza, Matthias Wilms, Myka Estes, Sarah J MacEachern, Nils D Forkert

Purpose: Autism is one of the most common neurodevelopmental conditions, and it is characterized by restricted, repetitive behaviors and social difficulties that affect daily functioning. It is challenging to provide an early and accurate diagnosis due to the wide diversity of symptoms and the developmental changes that occur during childhood. We evaluate the feasibility of an explainable deep learning (DL) model using structural MRI (sMRI) to identify meaningful brain biomarkers relevant to autism in children and thus support its diagnosis.

Approach: A total of 452 T1-weighted sMRI scans from children aged 9 to 11 years were obtained from the Autism Brain Imaging Data Exchange database. A DL model was trained to differentiate between autistic and typically developing children. Model explainability was assessed using saliency maps to identify key brain regions contributing to classification. Model performance was evaluated across 20 folds and compared with traditional machine learning models trained with regional volumetric features extracted from the sMRI scans.

Results: The model achieved a mean area under the receiver operating characteristic curve of 71.2%. The saliency maps highlighted brain regions that are known neuroanatomical and functional biomarkers associated with autism, such as the cuneus, pericalcarine, ventricles, lingual, vermal lobules, caudate, and thalamus.

Conclusions: We show the potential of interpretable DL models trained on sMRI data to aid in autism diagnosis within a narrowly defined pediatric age group. Our findings contribute to the field of explainable artificial intelligence methods in neurodevelopmental research and may help in clinical decision-making for autism and other neurodevelopmental conditions.
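The saliency maps used above score each input voxel by the magnitude of the model output's gradient with respect to that voxel. The study applies this to a CNN via backpropagation; the following numpy sketch illustrates the same idea on a simple logistic classifier, where the gradient can be written in closed form (a toy stand-in, not the study's model).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_saliency(x, w, b):
    """Gradient-based saliency for a logistic model p = sigmoid(w.x + b).

    For a CNN the same quantity, |dp/dx| per input voxel computed by
    backpropagation, highlights the regions driving the prediction."""
    p = sigmoid(w @ x + b)
    grad = p * (1.0 - p) * w          # chain rule: dp/dx
    return np.abs(grad)               # magnitude = voxel importance

# toy 'image' of 6 voxels; only the first two carry class signal
x = np.array([1.0, -1.0, 0.2, 0.0, 0.3, -0.1])
w = np.array([2.0, -2.0, 0.0, 0.0, 0.0, 0.0])
sal = gradient_saliency(x, w, b=0.0)  # large for voxels 0 and 1, zero elsewhere
```

Voxels with zero weight receive zero saliency, so the map isolates exactly the inputs the classifier uses, which is what allows the brain-region interpretation reported in the results.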
