Fetal-BET: Brain Extraction Tool for Fetal MRI

IEEE Open Journal of Engineering in Medicine and Biology · IF 2.7 · Q3 (Engineering, Biomedical) · Pub Date: 2024-07-12 · DOI: 10.1109/OJEMB.2024.3426969
Razieh Faghihpirayesh;Davood Karimi;Deniz Erdoğmuş;Ali Gholipour
{"title":"胎儿-BET:胎儿磁共振成像脑提取工具","authors":"Razieh Faghihpirayesh;Davood Karimi;Deniz Erdoğmuş;Ali Gholipour","doi":"10.1109/OJEMB.2024.3426969","DOIUrl":null,"url":null,"abstract":"<italic>Goal:</i>\n In this study, we address the critical challenge of fetal brain extraction from MRI sequences. Fetal MRI has played a crucial role in prenatal neurodevelopmental studies and in advancing our knowledge of fetal brain development \n<italic>in-utero</i>\n. Fetal brain extraction is a necessary first step in most computational fetal brain MRI pipelines. However, it poses significant challenges due to 1) non-standard fetal head positioning, 2) fetal movements during examination, and 3) vastly heterogeneous appearance of the developing fetal brain and the neighboring fetal and maternal anatomy across gestation, and with various sequences and scanning conditions. Development of a machine learning method to effectively address this task requires a large and rich labeled dataset that has not been previously available. Currently, there is no method for accurate fetal brain extraction on various fetal MRI sequences. \n<italic>Methods:</i>\n In this work, we first built a large annotated dataset of approximately 72,000 2D fetal brain MRI images. Our dataset covers the three common MRI sequences including T2-weighted, diffusion-weighted, and functional MRI acquired with different scanners. These data include images of normal and pathological brains. Using this dataset, we developed and validated deep learning methods, by exploiting the power of the U-Net style architectures, the attention mechanism, feature learning across multiple MRI modalities, and data augmentation for fast, accurate, and generalizable automatic fetal brain extraction. \n<italic>Results:</i>\n Evaluations on independent test data, including data available from other centers, show that our method achieves accurate brain extraction on heterogeneous test data acquired with different scanners, on pathological brains, and at various gestational stages. \n<italic>Conclusions:</i>\nBy leveraging rich information from diverse multi-modality fetal MRI data, our proposed deep learning solution enables precise delineation of the fetal brain on various fetal MRI sequences. The robustness of our deep learning model underscores its potential utility for fetal brain imaging.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":null,"pages":null},"PeriodicalIF":2.7000,"publicationDate":"2024-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10596549","citationCount":"0","resultStr":"{\"title\":\"Fetal-BET: Brain Extraction Tool for Fetal MRI\",\"authors\":\"Razieh Faghihpirayesh;Davood Karimi;Deniz Erdoğmuş;Ali Gholipour\",\"doi\":\"10.1109/OJEMB.2024.3426969\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<italic>Goal:</i>\\n In this study, we address the critical challenge of fetal brain extraction from MRI sequences. Fetal MRI has played a crucial role in prenatal neurodevelopmental studies and in advancing our knowledge of fetal brain development \\n<italic>in-utero</i>\\n. Fetal brain extraction is a necessary first step in most computational fetal brain MRI pipelines. 
However, it poses significant challenges due to 1) non-standard fetal head positioning, 2) fetal movements during examination, and 3) vastly heterogeneous appearance of the developing fetal brain and the neighboring fetal and maternal anatomy across gestation, and with various sequences and scanning conditions. Development of a machine learning method to effectively address this task requires a large and rich labeled dataset that has not been previously available. Currently, there is no method for accurate fetal brain extraction on various fetal MRI sequences. \\n<italic>Methods:</i>\\n In this work, we first built a large annotated dataset of approximately 72,000 2D fetal brain MRI images. Our dataset covers the three common MRI sequences including T2-weighted, diffusion-weighted, and functional MRI acquired with different scanners. These data include images of normal and pathological brains. Using this dataset, we developed and validated deep learning methods, by exploiting the power of the U-Net style architectures, the attention mechanism, feature learning across multiple MRI modalities, and data augmentation for fast, accurate, and generalizable automatic fetal brain extraction. \\n<italic>Results:</i>\\n Evaluations on independent test data, including data available from other centers, show that our method achieves accurate brain extraction on heterogeneous test data acquired with different scanners, on pathological brains, and at various gestational stages. \\n<italic>Conclusions:</i>\\nBy leveraging rich information from diverse multi-modality fetal MRI data, our proposed deep learning solution enables precise delineation of the fetal brain on various fetal MRI sequences. The robustness of our deep learning model underscores its potential utility for fetal brain imaging.\",\"PeriodicalId\":33825,\"journal\":{\"name\":\"IEEE Open Journal of Engineering in Medicine and Biology\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.7000,\"publicationDate\":\"2024-07-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10596549\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Open Journal of Engineering in Medicine and Biology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10596549/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of Engineering in Medicine and Biology","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10596549/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract

Goal: In this study, we address the critical challenge of fetal brain extraction from MRI sequences. Fetal MRI has played a crucial role in prenatal neurodevelopmental studies and in advancing our knowledge of fetal brain development in-utero. Fetal brain extraction is a necessary first step in most computational fetal brain MRI pipelines. However, it poses significant challenges due to 1) non-standard fetal head positioning, 2) fetal movements during examination, and 3) vastly heterogeneous appearance of the developing fetal brain and the neighboring fetal and maternal anatomy across gestation, and with various sequences and scanning conditions. Development of a machine learning method to effectively address this task requires a large and rich labeled dataset that has not been previously available. Currently, there is no method for accurate fetal brain extraction on various fetal MRI sequences.

Methods: In this work, we first built a large annotated dataset of approximately 72,000 2D fetal brain MRI images. Our dataset covers the three common MRI sequences including T2-weighted, diffusion-weighted, and functional MRI acquired with different scanners. These data include images of normal and pathological brains. Using this dataset, we developed and validated deep learning methods, by exploiting the power of the U-Net style architectures, the attention mechanism, feature learning across multiple MRI modalities, and data augmentation for fast, accurate, and generalizable automatic fetal brain extraction.

Results: Evaluations on independent test data, including data available from other centers, show that our method achieves accurate brain extraction on heterogeneous test data acquired with different scanners, on pathological brains, and at various gestational stages.

Conclusions: By leveraging rich information from diverse multi-modality fetal MRI data, our proposed deep learning solution enables precise delineation of the fetal brain on various fetal MRI sequences. The robustness of our deep learning model underscores its potential utility for fetal brain imaging.
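The Methods above mention a U-Net style architecture with an attention mechanism for slice-wise brain extraction. As a rough illustration, the following is a minimal sketch of an attention-gated 2D U-Net in PyTorch; the channel counts, two-level depth, and the AttentionGate / AttentionUNet2D classes are assumptions made for illustration, not the authors' Fetal-BET implementation or its released code.

```python
# Minimal 2D attention U-Net sketch for slice-wise fetal brain masking.
# Illustrative only: layer sizes and the attention-gate design are assumptions,
# not the Fetal-BET architecture described in the paper.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class AttentionGate(nn.Module):
    """Additive attention gate: re-weights skip features using the decoder signal."""
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.w_skip = nn.Conv2d(skip_ch, inter_ch, 1)
        self.w_gate = nn.Conv2d(gate_ch, inter_ch, 1)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, 1), nn.Sigmoid())

    def forward(self, skip, gate):
        attn = self.psi(torch.relu(self.w_skip(skip) + self.w_gate(gate)))
        return skip * attn  # suppress non-brain regions in the skip connection


class AttentionUNet2D(nn.Module):
    """Small encoder-decoder with one attention-gated skip connection per level."""
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.att2 = AttentionGate(base * 2, base * 2, base)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.att1 = AttentionGate(base, base, base // 2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)  # single-channel brain-mask logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.up2(b)
        d2 = self.dec2(torch.cat([self.att2(e2, d2), d2], dim=1))
        d1 = self.up1(d2)
        d1 = self.dec1(torch.cat([self.att1(e1, d1), d1], dim=1))
        return self.head(d1)  # apply sigmoid + threshold to obtain the brain mask


if __name__ == "__main__":
    model = AttentionUNet2D()
    slice_2d = torch.randn(1, 1, 128, 128)  # one MRI slice: (batch, channel, H, W)
    print(model(slice_2d).shape)  # -> torch.Size([1, 1, 128, 128])
```

In a full pipeline, the multi-modality feature learning and data augmentation described in the abstract would live in the dataset and training loop (for example, sampling T2-weighted, diffusion-weighted, and functional slices and applying intensity and geometric transforms) rather than in this architecture stub.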
Source Journal
CiteScore: 9.50
Self-citation rate: 3.40%
Publication volume: 20
Review turnaround: 10 weeks
Journal Description: The IEEE Open Journal of Engineering in Medicine and Biology (IEEE OJEMB) is dedicated to serving the community of innovators in medicine, technology, and the sciences, with the core goal of advancing the highest-quality interdisciplinary research between these disciplines. The journal firmly believes that the future of medicine depends on close collaboration between biology and technology, and that fostering interaction between these fields is an important way to advance key discoveries that can improve clinical care. IEEE OJEMB is a gold open access journal in which the authors retain the copyright to their papers and readers have free access to the full text and PDFs on the IEEE Xplore® Digital Library. However, authors are required to pay an article processing fee at the time their paper is accepted for publication, to cover the cost of publication.