Journal of Medical Imaging: Latest Publications

Scribble-supervised method for cardiac tissue segmentation using position and temporal contrastive information.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-11-01 Epub Date: 2025-11-24 DOI: 10.1117/1.JMI.12.6.064002
Xiaoxuan Ma, Yingao Du, Kuncheng Lian

Purpose: Accurate pixel-level segmentation is essential for medical image analysis, particularly in assisting diagnosis and treatment planning. However, fully supervised learning methods rely heavily on high-quality annotated data, which are often scarce due to the high cost of manual labeling, privacy concerns, and limited availability. We aim to reduce reliance on precise annotations and improve segmentation performance under weak supervision.

Approach: We propose scribble position and temporal contrast learning (SPTCL), a segmentation method that combines contrastive learning with weak supervision. Our method leverages the spatial continuity of 3D medical image volumes and the anatomical similarity across cardiac phases to construct a contrastive learning task that learns robust feature representations from unlabeled data. To strengthen feature extraction, the encoder is first pretrained on the ACDC dataset with this contrastive task and then transferred to a weakly supervised segmentation network with a dual-branch decoder for fine-tuning. The predictions from both branches are fused into refined pseudo-labels, which iteratively guide network training using only coarse scribble annotations.
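
A minimal PyTorch sketch of the dual-branch pseudo-labeling step described above may help make the training loop concrete. It is illustrative only: the fusion rule (a random convex mix of branch probabilities), the ignore-index convention for unscribbled pixels, and the loss weighting are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the dual-branch pseudo-label scheme (not the authors' code).
import torch
import torch.nn.functional as F

IGNORE = 4  # assumption: unscribbled pixels carry this index in the scribble map

def scribble_ce(logits, scribbles):
    # Partial cross-entropy: supervise only the sparsely scribbled pixels.
    return F.cross_entropy(logits, scribbles, ignore_index=IGNORE)

def fused_pseudo_labels(logits_a, logits_b):
    # Fuse the two decoder branches with a random convex mix of probabilities.
    alpha = torch.rand(1, device=logits_a.device)
    probs = alpha * logits_a.softmax(1) + (1 - alpha) * logits_b.softmax(1)
    return probs.argmax(1)  # hard pseudo-labels for the next iteration

def training_step(logits_a, logits_b, scribbles, lam=0.5):
    pseudo = fused_pseudo_labels(logits_a, logits_b)  # (N, H, W) long
    sup = scribble_ce(logits_a, scribbles) + scribble_ce(logits_b, scribbles)
    psl = F.cross_entropy(logits_a, pseudo) + F.cross_entropy(logits_b, pseudo)
    return sup + lam * psl  # scribble supervision + pseudo-label guidance
```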

Results: Experiments on the ACDC dataset show that SPTCL outperforms existing models, achieving a Dice coefficient of 90.5%, with a 2.5% improvement over the baseline and a 1.7% improvement over the latest model. Furthermore, SPTCL reduces training time by ∼33%.

Conclusions: SPTCL effectively addresses the challenges of limited annotation in medical image segmentation by uniting contrastive learning with weak supervision. It demonstrates strong potential for practical deployment in clinical settings where high-quality labels are difficult to obtain.

Automated coronary calcium detection and scoring on multicenter, multiprotocol noncontrast CT.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-11-01 Epub Date: 2025-11-13 DOI: 10.1117/1.JMI.12.6.064502
Andrew M Nguyen, Jianfei Liu, Tejas Sudharshan Mathai, Peter C Grayson, Perry J Pickhardt, Ronald M Summers

Purpose: Coronary artery disease is the leading global cause of mortality. Automated detection and scoring of calcified plaques can help cardiovascular risk assessment. We propose a deep learning method for automatic detection and scoring of coronary artery calcified plaques on noncontrast CT scans.

Approach: We utilized five datasets from one internal and four external tertiary care institutions, three of them with manually annotated plaques. A coronary artery calcified plaque detection model was developed using the state-of-the-art nnU-Net deep learning framework, incorporating simultaneous segmentation of the aorta, heart, and lungs to reduce false positives. The training data consisted of 641 noncontrast CT scans from three labeled datasets, representing diverse vascular disease etiologies. Agatston scores were automatically computed to quantify plaque burden. The model was tested on 160 labeled CT scans and compared with a previous detection method. In addition, Agatston scores were correlated with patient demographics and clinical outcomes using two unlabeled datasets.
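
Agatston scoring, which the pipeline computes automatically, is itself a simple thresholded, density-weighted area sum. The sketch below follows the standard definition (130-HU threshold, per-lesion density weights of 1 to 4 based on peak attenuation); lesion handling is simplified, and the model's artery localization step is omitted.

```python
# Standard Agatston score from a noncontrast CT volume (simplified sketch).
import numpy as np
from scipy import ndimage

def density_weight(peak_hu):
    # Weight by the lesion's peak attenuation, per Agatston's definition.
    if peak_hu >= 400: return 4
    if peak_hu >= 300: return 3
    if peak_hu >= 200: return 2
    return 1  # 130-199 HU

def agatston_score(volume_hu, pixel_area_mm2, min_area_mm2=1.0):
    score = 0.0
    for sl in volume_hu:                       # scored slice by slice
        labels, n = ndimage.label(sl >= 130)   # candidate calcified lesions
        for i in range(1, n + 1):
            lesion = labels == i
            area = lesion.sum() * pixel_area_mm2
            if area < min_area_mm2:            # ignore sub-millimeter specks
                continue
            score += area * density_weight(sl[lesion].max())
    return score
```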

Results: The predicted and reference Agatston scores demonstrated a strong correlation (r² = 0.973), with a precision of 89.3%, recall of 89.1%, and an average Dice score of 75.0 ± 16.0% on the labeled testing datasets. Stratification into four Agatston groups achieved 92.0% accuracy and a Cohen's kappa of 0.913. In the unlabeled datasets, Agatston groups showed significant correlations with the Framingham risk score, cardiovascular disease, heart failure, cancer status, fragility fracture risk, smoking, and age, while remaining consistent across race and scanner types.

Conclusions: Coronary artery plaques were accurately detected and segmented using the proposed nnU-Net-based method on noncontrast CT scans. The Agatston-score-based plaque burden assessment facilitates cardiovascular risk stratification, enabling opportunistic screening and population-based studies.

Longitudinal outcome prediction of prostate cancer patients on active surveillance using multiple instance learning.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-11-01 Epub Date: 2025-10-14 DOI: 10.1117/1.JMI.12.6.061408
Filip Winzell, Ida Arvidsson, Kalle Åström, Niels Christian Overgaard, Felicia-Elena Marginean, Athanasios Simoulis, Anders Bjartell, Agnieszka Krzyzanowska, Anders Heyden

Purpose: To avoid over-treatment of prostate cancer patients following screening for elevated prostate-specific antigen levels, keeping patients on active surveillance has been suggested as an alternative to radical treatment. For patients with low-grade cancer, this entails recurring visits to monitor progression. Our aim was to develop an artificial intelligence-based model that can identify high-risk patients in a cohort of prostate cancer patients on active surveillance.

Approach: We have developed a multiple instance learning-based framework for predicting the longitudinal outcomes for prostate cancer patients on active surveillance. Our models were trained only on whole-slide images with patient-level labels without using explicit Gleason grades. We employed the UNI-2 foundation model and the well-established attention-based multiple instance learning approach. We further evaluated our models by fitting Cox proportional hazards models and testing them on an external dataset.
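
The attention-based MIL pooling referred to above (the well-established ABMIL formulation) reduces a bag of patch features to one patient-level prediction, with attention weights indicating which patches drive it. The sketch below treats the UNI-2 encoder as a frozen patch-feature extractor; the feature dimension, hidden size, and single-logit head are assumptions.

```python
# Generic attention-based MIL head over precomputed patch features.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=1536, hidden=256):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.head = nn.Linear(feat_dim, 1)  # patient-level logit

    def forward(self, h):                        # h: (num_patches, feat_dim)
        a = torch.softmax(self.attn(h), dim=0)   # attention over patches
        z = (a * h).sum(dim=0)                   # weighted bag representation
        return self.head(z), a.squeeze(-1)       # prediction + patch weights

# Usage sketch: logit, attn = AttentionMIL()(patch_features)  # features from UNI-2
```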

Results: With this approach, we achieved an average area under the receiver operating characteristic curve of 0.958 (95% CI, 0.957 to 0.959). Fitting Cox models to the predicted probabilities achieved a C-index of 0.824 and a hazard ratio of 2.32. However, all models showed a large drop in performance when evaluated on an external dataset.
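
For reference, the C-index reported above is the fraction of comparable patient pairs whose predicted risks are correctly ordered against their observed times. It can be computed directly with lifelines, as in the toy example below; the values are made up, and the risk is negated because lifelines scores concordance as "higher score = longer survival".

```python
# Toy C-index computation with lifelines (illustrative values only).
import numpy as np
from lifelines.utils import concordance_index

times = np.array([5.0, 8.0, 3.0, 10.0])  # months to progression or censoring
events = np.array([1, 0, 1, 0])          # 1 = progression observed
risk = np.array([0.8, 0.3, 0.9, 0.1])    # model-predicted progression risk

print(concordance_index(times, -risk, events))  # -> 1.0 for this toy ordering
```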

Conclusion: We show that avoiding Gleason grades is beneficial for longitudinal outcome prediction of prostate cancer. Our results suggest that benign prostate tissue contains prognostic information. However, before our models can be used clinically, considerable work remains to improve their generalization.

Enhancing breast cancer detection on screening mammogram using self-supervised learning and a hybrid deep model of Swin Transformer and convolutional neural networks.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-11-01 Epub Date: 2025-05-14 DOI: 10.1117/1.JMI.12.S2.S22007
Han Chen, Anne L Martel

Purpose: The scarcity of high-quality curated labeled medical training data remains one of the major limitations in applying artificial intelligence systems to breast cancer diagnosis. Deep models for mammogram analysis and mass (or micro-calcification) detection require training with a large volume of labeled images, which are often expensive and time-consuming to collect. To address this challenge, we propose a method that leverages self-supervised learning (SSL) and a deep hybrid model, named HybMNet, which combines local self-attention and fine-grained feature extraction to enhance breast cancer detection on screening mammograms.

Approach: Our method employs a two-stage learning process: (1) SSL pretraining: We utilize Efficient Self-Supervised Vision Transformers, an SSL technique, to pretrain a Swin Transformer (Swin-T) using a limited set of mammograms. The pretrained Swin-T then serves as the backbone for the downstream task. (2) Downstream training: The proposed HybMNet combines the Swin-T backbone with a convolutional neural network (CNN)-based network and a fusion strategy. The Swin-T employs local self-attention to identify informative patch regions from the high-resolution mammogram, whereas the CNN-based network extracts fine-grained local features from the selected patches. A fusion module then integrates global and local information from both networks to generate robust predictions. The HybMNet is trained end-to-end, with the loss function combining the outputs of the Swin-T and CNN modules to optimize feature extraction and classification performance.
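
A speculative sketch of the global/local fusion idea follows: the transformer branch supplies a global logit plus attention scores over candidate patches, the top-k attended high-resolution patches feed a CNN branch, and the two logits are combined. The choice of ResNet-18, k = 4, and simple mean fusion are illustrative assumptions, not the HybMNet architecture itself.

```python
# Illustrative global/local fusion over a transformer-scored mammogram.
import torch
import torch.nn as nn
import torchvision.models as tvm

class GlobalLocalFusion(nn.Module):
    def __init__(self, k=4):
        super().__init__()
        self.k = k
        self.cnn = tvm.resnet18(weights=None)            # local fine-grained branch
        self.cnn.fc = nn.Linear(self.cnn.fc.in_features, 1)

    def forward(self, global_logit, patch_scores, patches):
        # patch_scores: (P,) attention from the Swin-T branch
        # patches:      (P, 3, H, W) high-resolution crops of the mammogram
        top = patch_scores.topk(min(self.k, patch_scores.numel())).indices
        local_logit = self.cnn(patches[top]).mean()      # fine-grained evidence
        return 0.5 * (global_logit + local_logit)        # fused malignancy logit
```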

Results: The proposed method was evaluated for its ability to detect breast cancer by distinguishing between benign (normal) and malignant mammograms. Leveraging SSL pretraining and the HybMNet model, it achieved an area under the ROC curve of 0.864 (95% CI: 0.852, 0.875) on the Chinese Mammogram Database (CMMD) dataset and 0.889 (95% CI: 0.875, 0.903) on the INbreast dataset, highlighting its effectiveness.

Conclusions: The quantitative results highlight the effectiveness of our proposed HybMNet and the SSL pretraining approach. In addition, visualizations of the selected region-of-interest patches show the model's potential for weakly supervised detection of microcalcifications, even though it was trained using only image-level labels.

Simulating dynamic tumor contrast enhancement in breast MRI using conditional generative adversarial networks.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-11-01 Epub Date: 2025-06-28 DOI: 10.1117/1.JMI.12.S2.S22014
Richard Osuala, Smriti Joshi, Apostolia Tsirikoglou, Lidia Garrucho, Walter H L Pinaya, Daniel M Lang, Julia A Schnabel, Oliver Diaz, Karim Lekadir

Purpose: Deep generative models and synthetic data generation have become essential for advancing computer-assisted diagnosis and treatment. We explore one emerging and particularly promising application of deep generative models: the generation of virtual contrast enhancement. This makes it possible to predict and simulate contrast enhancement in breast magnetic resonance imaging (MRI) without physical contrast agent injection, thereby enabling lesion localization and categorization even in patient populations for whom the lengthy, costly, and invasive process of contrast agent injection is contraindicated.

Approach: We define a framework for desirable properties of synthetic data, which leads us to propose the scaled aggregate measure (SAMe) consisting of a balanced set of scaled complementary metrics for generative model training and convergence evaluation. We further adopt a conditional generative adversarial network to translate from non-contrast-enhanced T1-weighted fat-saturated breast MRI slices to their dynamic contrast-enhanced (DCE) counterparts, thus learning to detect, localize, and adequately highlight breast cancer lesions. Next, we extend our model approach to jointly generate multiple DCE-MRI time points, enabling the simulation of contrast enhancement across temporal DCE-MRI acquisitions. In addition, three-dimensional U-Net tumor segmentation models are implemented and trained on combinations of synthetic and real DCE-MRI data to investigate the effect of data augmentation with synthetic DCE-MRI volumes.
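
The translation step itself can be sketched pix2pix-style: the discriminator judges (pre-contrast, DCE) pairs, and the generator is trained with an adversarial term plus an L1 term tying synthetic enhancement to the real acquisition. G, D, the optimizers, and the L1 weight below are placeholders; the paper's actual architecture and loss configuration may differ.

```python
# One conditional-GAN training step for pre-contrast -> DCE translation (sketch).
import torch
import torch.nn.functional as F

def cgan_step(G, D, opt_g, opt_d, pre, dce, lambda_l1=100.0):
    fake = G(pre)                                        # simulated DCE image

    # Discriminator: real vs. generated pairs, conditioned on the pre-contrast input.
    d_real = D(torch.cat([pre, dce], dim=1))
    d_fake = D(torch.cat([pre, fake.detach()], dim=1))
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator and match the real enhancement (L1).
    d_fake = D(torch.cat([pre, fake], dim=1))
    loss_g = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
              + lambda_l1 * F.l1_loss(fake, dce))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```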

Results: Across four main sets of experiments: (i) the variation across single metrics demonstrated the value of SAMe; (ii) the quality and potential of virtual contrast injection for tumor detection and localization were shown; (iii) segmentation models augmented with synthetic DCE-MRI data were more robust in the presence of domain shifts between pre-contrast and DCE-MRI domains; and (iv) the joint synthesis approach for multi-sequence DCE-MRI produced temporally coherent synthetic DCE-MRI sequences, indicating the generative model's capability to learn complex contrast enhancement patterns.

Conclusions: Virtual contrast injection can produce accurate synthetic DCE-MRI images, potentially enhancing breast cancer diagnosis and treatment protocols. We demonstrate that detecting, localizing, and segmenting tumors using synthetic DCE-MRI is feasible and promising, particularly for patients for whom contrast agent injection is risky or contraindicated. Jointly generating multiple subsequent DCE-MRI sequences can increase image quality and unlock clinical applications that assess tumor characteristics through the tumor's response to contrast media injection, a pillar of personalized treatment planning.

Introduction to the JMI Special Issue on Advances in Breast Imaging.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-11-01 Epub Date: 2025-09-10 DOI: 10.1117/1.JMI.12.S2.S22001
Maryellen L Giger, Susan Astley Theodossiadis, Karen Drukker, Hui Li, Andrew D A Maidment, Heather M Whitney

The editorial introduces the JMI Special Issue on Advances in Breast Imaging, reflecting on the current forefront of breast imaging research.

Introduction to the JMI Special Section on Computational Pathology.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-11-01 Epub Date: 2025-12-16 DOI: 10.1117/1.JMI.12.6.061401
Baowei Fei, Metin Nafi Gurcan, Yuankai Huo, Pinaki Sarder, Aaron Ward
{"title":"Introduction to the JMI Special Section on Computational Pathology.","authors":"Baowei Fei, Metin Nafi Gurcan, Yuankai Huo, Pinaki Sarder, Aaron Ward","doi":"10.1117/1.JMI.12.6.061401","DOIUrl":"https://doi.org/10.1117/1.JMI.12.6.061401","url":null,"abstract":"","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 6","pages":"061401"},"PeriodicalIF":1.7,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12705466/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145776123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0

HID-CON: weakly supervised intrahepatic cholangiocarcinoma subtype classification of whole slide images using contrastive hidden class detection.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-11-01 Epub Date: 2025-03-12 DOI: 10.1117/1.JMI.12.6.061402
Jing Wei Tan, Kyoungbun Lee, Won-Ki Jeong

Purpose: Biliary tract cancer, also known as intrahepatic cholangiocarcinoma (IHCC), is a rare disease that shows no clear symptoms during its early stage, but its prognosis depends heavily on the cancer subtype. Hence, an accurate cancer subtype classification model is necessary to provide better treatment plans to patients and to reduce mortality. However, annotating histopathology images at the pixel or patch level is time-consuming and labor-intensive for giga-pixel whole slide images. To address this problem, we propose a weakly supervised method for classifying IHCC subtypes using only image-level labels.

Approach: The core idea of the proposed method is to detect regions (i.e., subimages or patches) commonly included in all subtypes, which we name the "hidden class," and to remove them via iterative application of contrastive loss and label smoothing. Doing so will enable us to obtain only patches that faithfully represent each subtype, which are then used to train the image-level classification model by multiple instance learning (MIL).
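
Two ingredients named above admit a compact sketch: a (C+1)-way patch classifier whose extra index serves as the hidden class, label smoothing over the slide-level subtype label, and a low-confidence rule for nominating hidden-class candidates in the next iteration. The (C+1) indexing and the confidence threshold are assumptions, and the contrastive term is omitted here.

```python
# Sketch of hidden-class handling with label smoothing (assumptions noted above).
import torch

def smoothed_subtype_loss(logits, subtype, eps=0.1):
    # logits: (P, C+1) per-patch scores; index C is the hidden class (assumption).
    n = logits.size(1)
    target = torch.full_like(logits, eps / (n - 1))  # smoothing mass on other classes
    target[:, subtype] = 1.0 - eps                   # soft slide-level subtype label
    return -(target * logits.log_softmax(1)).sum(1).mean()

def hidden_class_candidates(logits, thresh=0.5):
    # Patches the model cannot confidently attribute to any subtype are
    # nominated as hidden-class candidates for the next training round.
    return logits.softmax(1).max(1).values < thresh  # boolean mask over patches
```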

Results: Our method outperforms the state-of-the-art weakly supervised learning methods ABMIL, TransMIL, and DTFD-MIL by ∼17%, 18%, and 8%, respectively, and achieves performance comparable to that of supervised methods.

Conclusions: The introduction of a hidden class to represent patches commonly found across all subtypes enhances the accuracy of IHCC classification and addresses the weak labeling problem in histopathology images.

Asymmetric scatter kernel estimation neural network for digital breast tomosynthesis.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-11-01 Epub Date: 2025-06-12 DOI: 10.1117/1.JMI.12.S2.S22008
Subong Hyun, Seoyoung Lee, Ilwong Choi, Choul Woo Shin, Seungryong Cho

Purpose: Various deep learning (DL) approaches have been developed for estimating scatter radiation in digital breast tomosynthesis (DBT). Existing DL methods generally employ an end-to-end training approach, overlooking the underlying physics of scatter formation. We propose a deep learning approach inspired by asymmetric scatter kernel superposition to estimate scatter in DBT.

Approach: We use the network to generate the scatter amplitude distribution as well as the scatter kernel width and asymmetric factor map. To account for variations in local breast thickness and shape in DBT projection data, we integrated the Euclidean distance map and projection angle information into the network design for estimating the asymmetric factor.
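
The kernel-superposition model behind this design admits a toy rendering: each (coarsely sampled) source pixel contributes an anisotropic Gaussian whose amplitude, width, and asymmetry are read from the network's output maps, and the scatter estimate is their sum. The Gaussian kernel family and the axis-aligned asymmetry below are assumptions for illustration, not the paper's exact kernel.

```python
# Toy asymmetric-kernel scatter superposition from network-predicted maps.
import numpy as np

def scatter_superposition(A, w, a, stride=8):
    # A: amplitude map, w: kernel-width map, a: asymmetry-factor map (all H x W).
    H, W = A.shape
    y, x = np.mgrid[0:H, 0:W]
    S = np.zeros((H, W))
    for i in range(0, H, stride):            # coarse source grid for tractability
        for j in range(0, W, stride):
            sx = max(w[i, j] * a[i, j], 1e-6)             # stretched along one axis
            sy = max(w[i, j] / max(a[i, j], 1e-6), 1e-6)  # compressed along the other
            S += A[i, j] * np.exp(-0.5 * (((x - j) / sx) ** 2 + ((y - i) / sy) ** 2))
    return S
```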

Results: Systematic experiments on numerical phantom data and physical experimental data demonstrated that the proposed approach outperforms UNet-based end-to-end scatter estimation and symmetric kernel-based approaches in terms of the signal-to-noise ratio and structural similarity index measure of the resulting scatter-corrected images.

Conclusions: The proposed method is believed to achieve a significant advance in scatter estimation for DBT projections, enabling robust and reliable physics-informed scatter correction.

Artificial intelligence in medical imaging diagnosis: are we ready for its clinical implementation?
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-11-01 Epub Date: 2025-06-19 DOI: 10.1117/1.JMI.12.6.061405
Oscar Ramos-Soto, Itzel Aranguren, Manuel Carrillo M, Diego Oliva, Sandra E Balderas-Mata

Purpose: We examine the transformative potential of artificial intelligence (AI) in medical imaging diagnosis, focusing on improving diagnostic accuracy and efficiency through advanced algorithms. We address the significant challenges preventing immediate clinical adoption of AI from technical, ethical, and legal perspectives. The aim is to highlight the current state of AI in medical imaging and outline the steps necessary to ensure safe, effective, and ethically sound clinical implementation.

Approach: We conduct a comprehensive discussion, with special emphasis on the technical requirements for robust AI models, the ethical frameworks needed for responsible deployment, and the legal implications, including data privacy and regulatory compliance. Explainable artificial intelligence (XAI) is examined as a means to increase transparency and build trust among healthcare professionals and patients.

Results: The analysis reveals key challenges to AI integration in clinical settings, including the need for extensive high-quality datasets, model reliability, advanced infrastructure, and compliance with regulatory standards. The lack of explainability in AI outputs remains a barrier, with XAI identified as crucial for meeting transparency standards and enhancing trust among end users.

Conclusions: Overcoming these barriers requires a collaborative, multidisciplinary approach to integrate AI into clinical practice responsibly. Addressing technical, ethical, and legal issues will support a smoother transition, fostering a more accurate, efficient, and patient-centered healthcare system in which AI augments traditional medical practices.
