Synthesizing Contrast-Enhanced MR Images from Noncontrast MR Images Using Deep Learning

American Journal of Neuroradiology · IF 3.1 · Zone 3 (Medicine) · Q2 (Clinical Neurology) · Publication date: 2024-03-01 · DOI: 10.3174/ajnr.a8107
Gowtham Murugesan, Fang F. Yu, Michael Achilleos, John DeBevits, Sahil Nalawade, Chandan Ganesh, Ben Wagner, Ananth J Madhuranthakam, Joseph A. Maldjian
{"title":"Synthesizing Contrast-Enhanced MR Images from Noncontrast MR Images Using Deep Learning","authors":"Gowtham Murugesan, Fang F. Yu, Michael Achilleos, John DeBevits, Sahil Nalawade, Chandan Ganesh, Ben Wagner, Ananth J Madhuranthakam, Joseph A. Maldjian","doi":"10.3174/ajnr.a8107","DOIUrl":null,"url":null,"abstract":"<sec><st>BACKGROUND AND PURPOSE:</st>\n<p>Recent developments in deep learning methods offer a potential solution to the need for alternative imaging methods due to concerns about the toxicity of gadolinium-based contrast agents. The purpose of the study was to synthesize virtual gadolinium contrast-enhanced T1-weighted MR images from noncontrast multiparametric MR images in patients with primary brain tumors by using deep learning.</p>\n</sec>\n<sec><st>MATERIALS AND METHODS:</st>\n<p>We trained and validated a deep learning network by using MR images from 335 subjects in the Brain Tumor Segmentation Challenge 2019 training data set. A held out set of 125 subjects from the Brain Tumor Segmentation Challenge 2019 validation data set was used to test the generalization of the model. A residual inception DenseNet network, called T1c-ET, was developed and trained to simultaneously synthesize virtual contrast-enhanced T1-weighted (vT1c) images and segment the enhancing portions of the tumor. Three expert neuroradiologists independently scored the synthesized vT1c images by using a 3-point Likert scale, evaluating image quality and contrast enhancement against ground truth T1c images (1 = poor, 2 = good, 3 = excellent).</p>\n</sec>\n<sec><st>RESULTS:</st>\n<p>The synthesized vT1c images achieved structural similarity index, peak signal-to-noise ratio, and normalized mean square error scores of 0.91, 64.35, and 0.03, respectively. There was moderate interobserver agreement between the 3 raters, regarding the algorithm&rsquo;s performance in predicting contrast enhancement, with a Fleiss kappa value of 0.61. Our model was able to accurately predict contrast enhancement in 88.8% of the cases (scores of 2 to 3 on the 3-point scale).</p>\n</sec>\n<sec><st>CONCLUSIONS:</st>\n<p>We developed a novel deep learning architecture to synthesize virtual postcontrast enhancement by using only conventional noncontrast brain MR images. Our results demonstrate the potential of deep learning methods to reduce the need for gadolinium contrast in the evaluation of primary brain tumors.</p>\n</sec>","PeriodicalId":7875,"journal":{"name":"American Journal of Neuroradiology","volume":null,"pages":null},"PeriodicalIF":3.1000,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"American Journal of Neuroradiology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.3174/ajnr.a8107","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"CLINICAL NEUROLOGY","Score":null,"Total":0}
Citations: 0

Abstract

BACKGROUND AND PURPOSE:

Concerns about the toxicity of gadolinium-based contrast agents have created a need for alternative imaging methods, and recent developments in deep learning offer a potential solution. The purpose of this study was to synthesize virtual gadolinium contrast-enhanced T1-weighted MR images from noncontrast multiparametric MR images in patients with primary brain tumors by using deep learning.

MATERIALS AND METHODS:

We trained and validated a deep learning network by using MR images from 335 subjects in the Brain Tumor Segmentation Challenge 2019 training data set. A held-out set of 125 subjects from the Brain Tumor Segmentation Challenge 2019 validation data set was used to test the generalization of the model. A residual inception DenseNet network, called T1c-ET, was developed and trained to simultaneously synthesize virtual contrast-enhanced T1-weighted (vT1c) images and segment the enhancing portions of the tumor. Three expert neuroradiologists independently scored the synthesized vT1c images by using a 3-point Likert scale, evaluating image quality and contrast enhancement against ground truth T1c images (1 = poor, 2 = good, 3 = excellent).
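The abstract does not include implementation details, but the multi-task design (jointly synthesizing a vT1c image and segmenting the enhancing tumor) can be sketched as a shared encoder-decoder with two output heads. The PyTorch snippet below is a minimal, hypothetical sketch: the block structure, channel counts, input modalities, loss terms, and the weighting factor `lambda_seg` are illustrative placeholders, not the actual T1c-ET architecture.

```python
# Hypothetical sketch (not the authors' released code): a shared
# encoder-decoder with two heads, one regressing the virtual
# contrast-enhanced image (vT1c) and one segmenting enhancing tumor.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvBlock(nn.Module):
    """3D conv + instance norm + ReLU, used in both encoder and decoder."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class MultiTaskSynthNet(nn.Module):
    """Shared trunk with a synthesis head and an enhancing-tumor head."""
    def __init__(self, in_channels=3, base=16):
        # in_channels=3 assumes noncontrast inputs such as T1, T2, and FLAIR.
        super().__init__()
        self.enc1 = ConvBlock(in_channels, base)
        self.enc2 = ConvBlock(base, base * 2)
        self.pool = nn.MaxPool3d(2)
        self.dec1 = ConvBlock(base * 2, base)
        self.synth_head = nn.Conv3d(base, 1, kernel_size=1)  # vT1c image
        self.seg_head = nn.Conv3d(base, 1, kernel_size=1)    # enhancement mask

    def forward(self, x):
        e1 = self.enc1(x)                       # full-resolution features
        e2 = self.enc2(self.pool(e1))           # downsampled features
        d1 = self.dec1(F.interpolate(e2, scale_factor=2, mode="trilinear",
                                     align_corners=False))
        return self.synth_head(d1), torch.sigmoid(self.seg_head(d1))


def joint_loss(v_t1c_pred, seg_pred, t1c_true, seg_true, lambda_seg=0.5):
    """Image reconstruction (L1) plus segmentation (BCE), jointly weighted."""
    recon = F.l1_loss(v_t1c_pred, t1c_true)
    seg = F.binary_cross_entropy(seg_pred, seg_true)
    return recon + lambda_seg * seg
```

Sharing a single trunk lets the segmentation objective steer the synthesis head toward reproducing enhancement in the correct regions, which is presumably the motivation for training the two tasks jointly.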

RESULTS:

The synthesized vT1c images achieved structural similarity index, peak signal-to-noise ratio, and normalized mean square error scores of 0.91, 64.35, and 0.03, respectively. There was moderate interobserver agreement among the 3 raters regarding the algorithm's performance in predicting contrast enhancement, with a Fleiss kappa value of 0.61. Our model accurately predicted contrast enhancement in 88.8% of cases (scores of 2 to 3 on the 3-point scale).
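The reported figures of merit (SSIM, PSNR, NMSE, and Fleiss kappa) are standard and can be reproduced with common Python libraries. The snippet below is an assumed sketch, not the authors' evaluation code: the exact NMSE normalization used in the study is not stated, so a reference-energy normalization is assumed, and the rater scores shown are placeholder values.

```python
# Assumed evaluation sketch using scikit-image, NumPy, and statsmodels.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa


def image_metrics(t1c_true, v_t1c):
    """SSIM, PSNR, and NMSE between ground-truth and synthesized volumes."""
    data_range = t1c_true.max() - t1c_true.min()
    ssim = structural_similarity(t1c_true, v_t1c, data_range=data_range)
    psnr = peak_signal_noise_ratio(t1c_true, v_t1c, data_range=data_range)
    # NMSE normalized by the energy of the reference image (assumed definition).
    nmse = np.sum((t1c_true - v_t1c) ** 2) / np.sum(t1c_true ** 2)
    return ssim, psnr, nmse


# Interobserver agreement: each row is one case, each column one of the
# 3 raters, and values are the Likert scores (1, 2, or 3). Placeholder data.
scores = np.array([
    [3, 3, 2],
    [2, 2, 2],
    [3, 2, 3],
    [1, 2, 1],
])
table, _ = aggregate_raters(scores)   # counts per category for each case
kappa = fleiss_kappa(table, method="fleiss")
print(f"Fleiss kappa: {kappa:.2f}")
```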

CONCLUSIONS:

We developed a novel deep learning architecture to synthesize virtual postcontrast enhancement by using only conventional noncontrast brain MR images. Our results demonstrate the potential of deep learning methods to reduce the need for gadolinium contrast in the evaluation of primary brain tumors.

Source journal: American Journal of Neuroradiology
CiteScore: 7.10 · Self-citation rate: 5.70% · Articles published: 506 · Review time: 2 months
Journal description: The mission of AJNR is to further knowledge in all aspects of neuroimaging, head and neck imaging, and spine imaging for neuroradiologists, radiologists, trainees, scientists, and associated professionals through print and/or electronic publication of quality peer-reviewed articles that lead to the highest standards in patient care, research, and education and to promote discussion of these and other issues through its electronic activities.