FetalBrainAwareNet: Bridging GANs with anatomical insight for fetal ultrasound brain plane synthesis

Computerized Medical Imaging and Graphics · IF 5.4 · Q1 (Engineering, Biomedical) · CAS Region 2 (Medicine) · Pub Date: 2024-05-28 · DOI: 10.1016/j.compmedimag.2024.102405
Angelo Lasala, Maria Chiara Fiorentino, Andrea Bandini, Sara Moccia
{"title":"FetalBrainAwareNet: Bridging GANs with anatomical insight for fetal ultrasound brain plane synthesis","authors":"Angelo Lasala ,&nbsp;Maria Chiara Fiorentino ,&nbsp;Andrea Bandini ,&nbsp;Sara Moccia","doi":"10.1016/j.compmedimag.2024.102405","DOIUrl":null,"url":null,"abstract":"<div><p>Over the past decade, deep-learning (DL) algorithms have become a promising tool to aid clinicians in identifying fetal head standard planes (FHSPs) during ultrasound (US) examination. However, the adoption of these algorithms in clinical settings is still hindered by the lack of large annotated datasets. To overcome this barrier, we introduce FetalBrainAwareNet, an innovative framework designed to synthesize anatomically accurate images of FHSPs. FetalBrainAwareNet introduces a cutting-edge approach that utilizes class activation maps as a prior in its conditional adversarial training process. This approach fosters the presence of the specific anatomical landmarks in the synthesized images. Additionally, we investigate specialized regularization terms within the adversarial training loss function to control the morphology of the fetal skull and foster the differentiation between the standard planes, ensuring that the synthetic images faithfully represent real US scans in both structure and overall appearance. The versatility of our FetalBrainAwareNet framework is highlighted by its ability to generate high-quality images of three predominant FHSPs using a singular, integrated framework. Quantitative (Fréchet inception distance of 88.52) and qualitative (t-SNE) results suggest that our framework generates US images with greater variability compared to state-of-the-art methods. By using the synthetic images generated with our framework, we increase the accuracy of FHSP classifiers by 3.2% compared to training the same classifiers solely with real acquisitions. These achievements suggest that using our synthetic images to increase the training set could provide benefits to enhance the performance of DL algorithms for FHSPs classification that could be integrated in real clinical scenarios.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102405"},"PeriodicalIF":5.4000,"publicationDate":"2024-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computerized Medical Imaging and Graphics","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S089561112400082X","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract

Over the past decade, deep-learning (DL) algorithms have become a promising tool to aid clinicians in identifying fetal head standard planes (FHSPs) during ultrasound (US) examination. However, the adoption of these algorithms in clinical settings is still hindered by the lack of large annotated datasets. To overcome this barrier, we introduce FetalBrainAwareNet, a framework designed to synthesize anatomically accurate FHSP images. FetalBrainAwareNet uses class activation maps as a prior in its conditional adversarial training process, which encourages the presence of the specific anatomical landmarks in the synthesized images. Additionally, we investigate specialized regularization terms within the adversarial training loss function to control the morphology of the fetal skull and promote differentiation between the standard planes, ensuring that the synthetic images faithfully represent real US scans in both structure and overall appearance. The versatility of FetalBrainAwareNet is highlighted by its ability to generate high-quality images of the three predominant FHSPs within a single, unified framework. Quantitative (Fréchet inception distance of 88.52) and qualitative (t-SNE) results suggest that our framework generates US images with greater variability than state-of-the-art methods. By using the synthetic images generated with our framework, we increase the accuracy of FHSP classifiers by 3.2% compared to training the same classifiers solely with real acquisitions. These results suggest that augmenting the training set with our synthetic images could enhance the performance of DL algorithms for FHSP classification and facilitate their integration into real clinical scenarios.
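To make the idea concrete, the sketch below illustrates how a class activation map (CAM) could serve as an anatomical prior inside a conditional adversarial generator objective. It is a minimal, hypothetical illustration only: the function generator_step, the frozen-classifier cam_extractor, the L1 penalty, and the weight lambda_anat are assumptions made for this sketch and do not reproduce the paper's actual losses or architecture.

```python
# Illustrative sketch only: a conditional GAN generator update in which a
# class activation map (CAM) acts as an anatomical prior. All names below
# (generator_step, cam_extractor, lambda_anat) are hypothetical and do not
# reproduce FetalBrainAwareNet's actual losses or architecture.
import torch
import torch.nn.functional as F


def generator_step(G, D, cam_extractor, z, plane_label, cam_prior, lambda_anat=1.0):
    """One generator update for class-conditional FHSP synthesis (sketch).

    z           -- latent noise, shape (B, latent_dim)
    plane_label -- integer FHSP class per sample, shape (B,)
    cam_prior   -- CAM used as an anatomical prior, shape (B, 1, H, W), in [0, 1]
    """
    fake = G(z, plane_label)        # synthetic US images, (B, 1, H, W)
    logits = D(fake, plane_label)   # conditional discriminator scores

    # Standard non-saturating conditional adversarial term.
    adv_loss = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

    # Hypothetical anatomy-aware regularizer: the CAM computed on the synthetic
    # image (e.g., by a frozen classifier) is pushed toward the prior CAM,
    # encouraging landmarks to appear where the prior expects them.
    cam_fake = cam_extractor(fake, plane_label)
    anat_loss = F.l1_loss(cam_fake, cam_prior)

    return adv_loss + lambda_anat * anat_loss


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; real models would be CNNs.
    B, H, W, zdim = 4, 64, 64, 16
    G = lambda z, y: torch.tanh(z.new_zeros(B, 1, H, W) + z.mean())
    D = lambda x, y: x.mean(dim=(1, 2, 3)).unsqueeze(1)
    cam_extractor = lambda x, y: torch.sigmoid(x)
    loss = generator_step(G, D, cam_extractor,
                          torch.randn(B, zdim), torch.randint(0, 3, (B,)),
                          torch.rand(B, 1, H, W))
    print(float(loss))
```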

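The abstract reports a Fréchet inception distance (FID) of 88.52 between real and synthetic images. The paper does not state which implementation was used to compute it; the snippet below is a small sketch of one common way to do so with the torchmetrics library (an assumption), using random uint8 tensors as stand-ins for real and synthetic US frames.

```python
# Hedged example: computing FID between real and synthetic image batches with
# torchmetrics (an assumption; the paper does not specify its FID tooling).
# Requires `pip install torchmetrics[image]` (pulls in torch-fidelity).
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)  # Inception-v3 pooled features

# Random uint8 stand-ins for real and synthetic frames, shape (N, 3, H, W).
# Grayscale US images would first be repeated to 3 channels, e.g. x.repeat(1, 3, 1, 1).
real_imgs = torch.randint(0, 256, (32, 3, 299, 299), dtype=torch.uint8)
fake_imgs = torch.randint(0, 256, (32, 3, 299, 299), dtype=torch.uint8)

fid.update(real_imgs, real=True)    # accumulate real-image features
fid.update(fake_imgs, real=False)   # accumulate synthetic-image features
print(f"FID: {fid.compute():.2f}")  # lower is better; the paper reports 88.52
```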
Source journal
CiteScore: 10.70
Self-citation rate: 3.50%
Publication volume: 71
Review time: 26 days
Journal description: The purpose of the journal Computerized Medical Imaging and Graphics is to act as a source for the exchange of research results concerning algorithmic advances, development, and application of digital imaging in disease detection, diagnosis, intervention, prevention, precision medicine, and population health. Included in the journal will be articles on novel computerized imaging or visualization techniques, including artificial intelligence and machine learning, augmented reality for surgical planning and guidance, big biomedical data visualization, computer-aided diagnosis, computerized-robotic surgery, image-guided therapy, imaging scanning and reconstruction, mobile and tele-imaging, radiomics, and imaging integration and modeling with other information relevant to digital health. The types of biomedical imaging include: magnetic resonance, computed tomography, ultrasound, nuclear medicine, X-ray, microwave, optical and multi-photon microscopy, video and sensory imaging, and the convergence of biomedical images with other non-imaging datasets.
Latest articles in this journal
Exploring transformer reliability in clinically significant prostate cancer segmentation: A comprehensive in-depth investigation.
DSIFNet: Implicit feature network for nasal cavity and vestibule segmentation from 3D head CT
AFSegNet: few-shot 3D ankle-foot bone segmentation via hierarchical feature distillation and multi-scale attention and fusion
VLFATRollout: Fully transformer-based classifier for retinal OCT volumes
WISE: Efficient WSI selection for active learning in histopathology