Deep Learning for Fast and Spatially-Constrained Tissue Quantification from Highly-Undersampled Data in Magnetic Resonance Fingerprinting (MRF).

Zhenghan Fang, Yong Chen, Mingxia Liu, Yiqiang Zhan, Weili Lin, Dinggang Shen
{"title":"Deep Learning for Fast and Spatially-Constrained Tissue Quantification from Highly-Undersampled Data in Magnetic Resonance Fingerprinting (MRF).","authors":"Zhenghan Fang,&nbsp;Yong Chen,&nbsp;Mingxia Liu,&nbsp;Yiqiang Zhan,&nbsp;Weili Lin,&nbsp;Dinggang Shen","doi":"10.1007/978-3-030-00919-9_46","DOIUrl":null,"url":null,"abstract":"<p><p>Magnetic resonance fingerprinting (MRF) is a novel quantitative imaging technique that allows simultaneous measurements of multiple important tissue properties in human body, e.g., T1 and T2 relaxation times. While MRF has demonstrated better scan efficiency as compared to conventional quantitative imaging techniques, further acceleration is desired, especially for certain subjects such as infants and young children. However, the conventional MRF framework only uses a simple template matching algorithm to quantify tissue properties, without considering the underlying spatial association among pixels in MRF signals. In this work, we aim to accelerate MRF acquisition by developing a new post-processing method that allows accurate quantification of tissue properties with <i>fewer</i> sampling data. Moreover, to improve the accuracy in quantification, the MRF signals from multiple surrounding pixels are used together to better estimate tissue properties at the central target pixel, which was simply done with the signal only from the target pixel in the original template matching method. In particular, a deep learning model, i.e., U-Net, is used to learn the mapping from the MRF signal evolutions to the tissue property map. To further reduce the network size of U-Net, principal component analysis (PCA) is used to reduce the dimensionality of the input signals. Based on <i>in vivo</i> brain data, our method can achieve accurate quantification for both T1 and T2 by using only 25% time points, which are <i>four times</i> of acceleration in data acquisition compared to the original template matching method.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"11046 ","pages":"398-405"},"PeriodicalIF":0.0000,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/978-3-030-00919-9_46","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Machine learning in medical imaging. MLMI (Workshop)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/978-3-030-00919-9_46","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2018/9/15 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

Magnetic resonance fingerprinting (MRF) is a novel quantitative imaging technique that allows simultaneous measurement of multiple important tissue properties in the human body, e.g., T1 and T2 relaxation times. While MRF has demonstrated better scan efficiency than conventional quantitative imaging techniques, further acceleration is desired, especially for certain subjects such as infants and young children. However, the conventional MRF framework uses only a simple template matching algorithm to quantify tissue properties, without considering the underlying spatial association among pixels in MRF signals. In this work, we aim to accelerate MRF acquisition by developing a new post-processing method that allows accurate quantification of tissue properties from fewer sampled data. Moreover, to improve quantification accuracy, the MRF signals from multiple surrounding pixels are used together to estimate the tissue properties at the central target pixel, whereas the original template matching method uses only the signal from the target pixel itself. In particular, a deep learning model, i.e., U-Net, is used to learn the mapping from the MRF signal evolutions to the tissue property maps. To further reduce the size of the U-Net, principal component analysis (PCA) is applied to reduce the dimensionality of the input signals. On in vivo brain data, our method achieves accurate quantification of both T1 and T2 using only 25% of the time points, a four-fold acceleration of data acquisition compared to the original template matching method.
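To make the two quantification schemes compared in the abstract concrete, the sketches below illustrate them in Python. Neither is the authors' released code; all function names, array shapes, and parameter values are illustrative assumptions.

First, the conventional template (dictionary) matching baseline: each measured signal evolution is matched, by maximum absolute correlation, against a precomputed dictionary of simulated signals, and the T1/T2 of the best-matching entry is assigned to that pixel. The inputs `X`, `D`, and `t1_t2` are hypothetical.

```python
# Minimal sketch of conventional MRF template (dictionary) matching.
import numpy as np

def match_dictionary(X, D, t1_t2):
    """X      : (n_pixels, n_timepoints) measured MRF signal evolutions
    D      : (n_entries, n_timepoints) simulated dictionary signals
    t1_t2  : (n_entries, 2) T1/T2 values (ms) per dictionary entry
    returns: (n_pixels, 2) estimated T1/T2 per pixel
    """
    # Normalize rows so the inner product equals the correlation coefficient.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Dn = D / np.linalg.norm(D, axis=1, keepdims=True)
    best = np.argmax(np.abs(Xn @ Dn.T), axis=1)  # best-matching entry per pixel
    return t1_t2[best]
```

Note that this estimator looks at one pixel at a time, which is exactly the limitation the paper targets. The proposed alternative compresses each pixel's signal evolution with PCA and feeds the resulting multi-channel image to a U-Net, so every output value is informed by the surrounding pixels. Below is a minimal PyTorch sketch under assumed settings: a two-scale U-Net with illustrative layer widths (the abstract does not specify them) and k retained principal components.

```python
# Minimal sketch of the proposed pipeline: PCA along the time axis,
# then a small U-Net mapping PCA-score channels to T1/T2 maps.
import torch
import torch.nn as nn

def pca_compress(signals, k=16):
    """signals: (n_pixels, n_timepoints) -> (n_pixels, k) PCA scores."""
    centered = signals - signals.mean(dim=0, keepdim=True)
    U, S, V = torch.pca_lowrank(centered, q=k, center=False)
    return centered @ V[:, :k]

class TinyUNet(nn.Module):
    """Two-scale U-Net: k PCA-score channels in, 2 channels (T1, T2) out."""
    def __init__(self, k=16):
        super().__init__()
        self.enc1 = self._block(k, 32)
        self.enc2 = self._block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = self._block(64, 32)      # 64 = 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, 2, 1)      # T1 and T2 maps

    @staticmethod
    def _block(c_in, c_out):
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
        )

    def forward(self, x):                    # x: (B, k, H, W), H and W even
        e1 = self.enc1(x)                    # (B, 32, H, W)
        e2 = self.enc2(self.pool(e1))        # (B, 64, H/2, W/2)
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)                 # (B, 2, H, W) -> T1/T2 maps
```

Training such a network needs ground-truth property maps; in setups like this they are typically derived from fully-sampled acquisitions via the dictionary matching above, with the network then applied to undersampled data.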
