Structural MRI Harmonization via Disentangled Latent Energy-Based Style Translation.

Mengqi Wu, Lintao Zhang, Pew-Thian Yap, Weili Lin, Hongtu Zhu, Mingxia Liu
{"title":"Structural MRI Harmonization via Disentangled Latent Energy-Based Style Translation.","authors":"Mengqi Wu, Lintao Zhang, Pew-Thian Yap, Weili Lin, Hongtu Zhu, Mingxia Liu","doi":"10.1007/978-3-031-45673-2_1","DOIUrl":null,"url":null,"abstract":"<p><p>Multi-site brain magnetic resonance imaging (MRI) has been widely used in clinical and research domains, but usually is sensitive to non-biological variations caused by site effects (<i>e.g.</i>, field strengths and scanning protocols). Several retrospective data harmonization methods have shown promising results in removing these non-biological variations at feature or whole-image level. Most existing image-level harmonization methods are implemented through generative adversarial networks, which are generally computationally expensive and generalize poorly on independent data. To this end, this paper proposes a disentangled latent energy-based style translation (DLEST) framework for image-level structural MRI harmonization. Specifically, DLEST disentangles <i>site-invariant image generation</i> and <i>site-specific style translation</i> via a latent autoencoder and an energy-based model. The autoencoder learns to encode images into low-dimensional latent space, and generates faithful images from latent codes. The energy-based model is placed in between the encoding and generation steps, facilitating style translation from a source domain to a target domain implicitly. This allows <i>highly generalizable image generation and efficient style translation</i> through the latent space. We train our model on 4,092 T1-weighted MRIs in 3 tasks: histogram comparison, acquisition site classification, and brain tissue segmentation. Qualitative and quantitative results demonstrate the superiority of our approach, which generally outperforms several state-of-the-art methods.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"14348 ","pages":"1-11"},"PeriodicalIF":0.0000,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10883146/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Machine learning in medical imaging. MLMI (Workshop)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/978-3-031-45673-2_1","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/10/15 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Multi-site brain magnetic resonance imaging (MRI) has been widely used in clinical and research domains, but is usually sensitive to non-biological variations caused by site effects (e.g., field strengths and scanning protocols). Several retrospective data harmonization methods have shown promising results in removing these non-biological variations at the feature or whole-image level. Most existing image-level harmonization methods are implemented through generative adversarial networks, which are generally computationally expensive and generalize poorly on independent data. To this end, this paper proposes a disentangled latent energy-based style translation (DLEST) framework for image-level structural MRI harmonization. Specifically, DLEST disentangles site-invariant image generation and site-specific style translation via a latent autoencoder and an energy-based model. The autoencoder learns to encode images into a low-dimensional latent space and to generate faithful images from latent codes. The energy-based model is placed between the encoding and generation steps, implicitly facilitating style translation from a source domain to a target domain. This allows highly generalizable image generation and efficient style translation through the latent space. We train our model on 4,092 T1-weighted MRIs and evaluate it on three tasks: histogram comparison, acquisition site classification, and brain tissue segmentation. Qualitative and quantitative results demonstrate the superiority of our approach, which generally outperforms several state-of-the-art methods.
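To make the two components described above more concrete, the following is a minimal, hypothetical PyTorch sketch of the general idea: a latent autoencoder for site-invariant encoding and image generation, plus an energy-based model over latent codes whose gradient steps nudge source-site codes toward the target-site style. All class names, layer sizes, and the Langevin-style update below are illustrative assumptions for exposition, not the authors' released implementation or training procedure (losses, EBM training, and architectural details are in the paper itself).

```python
# Illustrative sketch only; names, dimensions, and update rule are assumptions.
import torch
import torch.nn as nn

class LatentAutoencoder(nn.Module):
    """Site-invariant part: encode an MRI slice to a low-dimensional latent
    code and reconstruct a faithful image from that code."""
    def __init__(self, in_ch=1, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(64 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8), nn.Unflatten(1, (64, 8, 8)),
            nn.Upsample(scale_factor=4), nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=4), nn.Conv2d(32, in_ch, 3, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

class LatentEBM(nn.Module):
    """Site-specific part: an energy function over latent codes. Source-site
    codes are pushed toward low-energy (target-site) regions of latent space."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.SiLU(),
            nn.Linear(256, 256), nn.SiLU(),
            nn.Linear(256, 1),
        )

    def forward(self, z):
        return self.net(z)

def translate_latent(ebm, z_src, n_steps=20, step_size=0.1, noise=0.01):
    """Move source-site latent codes toward the target-site style by a few
    noisy gradient steps on the energy (a Langevin-style update)."""
    z = z_src.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        energy = ebm(z).sum()
        grad, = torch.autograd.grad(energy, z)
        z = (z - step_size * grad + noise * torch.randn_like(z)).detach().requires_grad_(True)
    return z.detach()

# Usage: harmonize a (dummy) batch of source-site slices toward the target style.
ae, ebm = LatentAutoencoder(), LatentEBM()
x_src = torch.randn(4, 1, 128, 128)      # placeholder T1w slices
_, z_src = ae(x_src)                     # encode (site-invariant step)
z_trans = translate_latent(ebm, z_src)   # style translation in latent space
x_harmonized = ae.decoder(z_trans)       # generate harmonized images
```

Because the translation happens entirely in the low-dimensional latent space, the image encoder/generator never needs to be retrained for a new site pair, which is the efficiency and generalization argument made in the abstract.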
