Automatic Hippocampal Subfield Segmentation from 3T Multi-modality Images.

Zhengwang Wu, Yaozong Gao, Feng Shi, Valerie Jewells, Dinggang Shen
{"title":"Automatic Hippocampal Subfield Segmentation from 3T Multi-modality Images.","authors":"Zhengwang Wu, Yaozong Gao, Feng Shi, Valerie Jewells, Dinggang Shen","doi":"10.1007/978-3-319-47157-0_28","DOIUrl":null,"url":null,"abstract":"<p><p>Hippocampal subfields play important and divergent roles in both memory formation and early diagnosis of many neurological diseases, but automatic subfield segmentation is less explored due to its small size and poor image contrast. In this paper, we propose an automatic learning-based hippocampal subfields segmentation framework using multi-modality 3TMR images, including T1 MRI and resting-state fMRI (rs-fMRI). To do this, we first acquire both 3T and 7T T1 MRIs for each training subject, and then the 7T T1 MRI are linearly registered onto the 3T T1 MRI. Six hippocampal subfields are manually labeled on the aligned 7T T1 MRI, which has the 7T image contrast but sits in the 3T T1 space. Next, corresponding appearance and relationship features from both 3T T1 MRI and rs-fMRI are extracted to train a structured random forest as a multi-label classifier to conduct the segmentation. Finally, the subfield segmentation is further refined iteratively by additional context features and updated relationship features. To our knowledge, this is the first work that addresses the challenging automatic hippocampal subfields segmentation using 3T routine T1 MRI and rs-fMRI. The quantitative comparison between our results and manual ground truth demonstrates the effectiveness of our method. Besides, we also find that (a) multi-modality features significantly improved subfield segmentation performance due to the complementary information among modalities; (b) automatic segmentation results using 3T multimodality images are partially comparable to those on 7T T1 MRI.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":" ","pages":"229-236"},"PeriodicalIF":0.0000,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5464731/pdf/nihms833106.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Machine learning in medical imaging. MLMI (Workshop)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/978-3-319-47157-0_28","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Hippocampal subfields play important and divergent roles in both memory formation and the early diagnosis of many neurological diseases, but automatic subfield segmentation remains underexplored because of the subfields' small size and poor image contrast. In this paper, we propose an automatic learning-based hippocampal subfield segmentation framework using multi-modality 3T MR images, including T1 MRI and resting-state fMRI (rs-fMRI). To do this, we first acquire both 3T and 7T T1 MRIs for each training subject and linearly register the 7T T1 MRI onto the 3T T1 MRI. Six hippocampal subfields are then manually labeled on the aligned 7T T1 MRI, which retains the 7T image contrast but resides in the 3T T1 space. Next, corresponding appearance and relationship features are extracted from both the 3T T1 MRI and the rs-fMRI to train a structured random forest as a multi-label classifier that performs the segmentation. Finally, the subfield segmentation is refined iteratively using additional context features and updated relationship features. To our knowledge, this is the first work to address the challenging task of automatic hippocampal subfield segmentation using routine 3T T1 MRI and rs-fMRI. A quantitative comparison between our results and the manual ground truth demonstrates the effectiveness of our method. In addition, we find that (a) multi-modality features significantly improve subfield segmentation performance owing to the complementary information across modalities, and (b) automatic segmentation results from 3T multi-modality images are partially comparable to those obtained on 7T T1 MRI.
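The iterative refinement described in the abstract follows an auto-context pattern: each round's class posteriors are fed back as extra features for the next round of classification. Below is a minimal Python sketch of that idea, using a plain scikit-learn RandomForestClassifier in place of the authors' structured random forest; the patch-based feature extractor, the rs-fMRI-derived correlation volume, the voxel sampling, and the number of refinement rounds are illustrative assumptions, not the published pipeline.

```python
# Minimal auto-context sketch (assumptions noted in comments); NOT the
# authors' structured random forest or their exact feature set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_ITER = 3      # assumed number of refinement rounds
N_LABELS = 7    # 6 hippocampal subfields + background

def patch_features(volume, coords, radius=2):
    """Flatten a small cubic patch around each voxel as appearance features.
    Assumes all coords lie at least `radius` voxels inside the volume."""
    feats = []
    for x, y, z in coords:
        patch = volume[x - radius:x + radius + 1,
                       y - radius:y + radius + 1,
                       z - radius:z + radius + 1]
        feats.append(patch.ravel())
    return np.asarray(feats)

def train_autocontext(t1, fmri_corr, labels, coords):
    """t1: 3T T1 volume; fmri_corr: an rs-fMRI-derived feature volume
    (e.g., a seed-correlation map) standing in for the paper's relationship
    features; labels: 7T-derived manual subfield labels in the 3T space;
    coords: sampled training voxels covering all N_LABELS classes."""
    y = np.array([labels[tuple(c)] for c in coords])
    appearance = np.hstack([patch_features(t1, coords),
                            patch_features(fmri_corr, coords)])
    context = np.zeros((len(coords), N_LABELS))  # no posteriors yet
    forests = []
    for _ in range(N_ITER):
        X = np.hstack([appearance, context])
        rf = RandomForestClassifier(n_estimators=100).fit(X, y)
        forests.append(rf)
        # Class posteriors become the context features of the next round.
        context = rf.predict_proba(X)
    return forests
```

In practice one would restrict the sampled voxels to a bounding box around the hippocampus and, at test time, apply the trained forests in the same order, recomputing the context features between rounds.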

