Representation Disentanglement for Multi-modal Brain MRI Analysis

Jiahong Ouyang, Ehsan Adeli, Kilian M Pohl, Qingyu Zhao, Greg Zaharchuk
{"title":"多模态脑MRI分析的表征解缠。","authors":"Jiahong Ouyang,&nbsp;Ehsan Adeli,&nbsp;Kilian M Pohl,&nbsp;Qingyu Zhao,&nbsp;Greg Zaharchuk","doi":"10.1007/978-3-030-78191-0_25","DOIUrl":null,"url":null,"abstract":"<p><p>Multi-modal MRIs are widely used in neuroimaging applications since different MR sequences provide complementary information about brain structures. Recent works have suggested that multi-modal deep learning analysis can benefit from explicitly disentangling anatomical (shape) and modality (appearance) information into separate image presentations. In this work, we challenge mainstream strategies by showing that they do not naturally lead to representation disentanglement both in theory and in practice. To address this issue, we propose a margin loss that regularizes the similarity in relationships of the representations across subjects and modalities. To enable robust training, we further use a conditional convolution to design a single model for encoding images of all modalities. Lastly, we propose a fusion function to combine the disentangled anatomical representations as a set of modality-invariant features for downstream tasks. We evaluate the proposed method on three multi-modal neuroimaging datasets. Experiments show that our proposed method can achieve superior disentangled representations compared to existing disentanglement strategies. Results also indicate that the fused anatomical representation has potential in the downstream task of zero-dose PET reconstruction and brain tumor segmentation.</p>","PeriodicalId":73379,"journal":{"name":"Information processing in medical imaging : proceedings of the ... conference","volume":" ","pages":"321-333"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8844656/pdf/nihms-1776957.pdf","citationCount":"22","resultStr":"{\"title\":\"Representation Disentanglement for Multi-modal Brain MRI Analysis.\",\"authors\":\"Jiahong Ouyang,&nbsp;Ehsan Adeli,&nbsp;Kilian M Pohl,&nbsp;Qingyu Zhao,&nbsp;Greg Zaharchuk\",\"doi\":\"10.1007/978-3-030-78191-0_25\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Multi-modal MRIs are widely used in neuroimaging applications since different MR sequences provide complementary information about brain structures. Recent works have suggested that multi-modal deep learning analysis can benefit from explicitly disentangling anatomical (shape) and modality (appearance) information into separate image presentations. In this work, we challenge mainstream strategies by showing that they do not naturally lead to representation disentanglement both in theory and in practice. To address this issue, we propose a margin loss that regularizes the similarity in relationships of the representations across subjects and modalities. To enable robust training, we further use a conditional convolution to design a single model for encoding images of all modalities. Lastly, we propose a fusion function to combine the disentangled anatomical representations as a set of modality-invariant features for downstream tasks. We evaluate the proposed method on three multi-modal neuroimaging datasets. Experiments show that our proposed method can achieve superior disentangled representations compared to existing disentanglement strategies. 
Results also indicate that the fused anatomical representation has potential in the downstream task of zero-dose PET reconstruction and brain tumor segmentation.</p>\",\"PeriodicalId\":73379,\"journal\":{\"name\":\"Information processing in medical imaging : proceedings of the ... conference\",\"volume\":\" \",\"pages\":\"321-333\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8844656/pdf/nihms-1776957.pdf\",\"citationCount\":\"22\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information processing in medical imaging : proceedings of the ... conference\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/978-3-030-78191-0_25\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2021/6/14 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information processing in medical imaging : proceedings of the ... conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/978-3-030-78191-0_25","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2021/6/14 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 22

Abstract


Multi-modal MRIs are widely used in neuroimaging applications since different MR sequences provide complementary information about brain structures. Recent works have suggested that multi-modal deep learning analysis can benefit from explicitly disentangling anatomical (shape) and modality (appearance) information into separate image presentations. In this work, we challenge mainstream strategies by showing that they do not naturally lead to representation disentanglement both in theory and in practice. To address this issue, we propose a margin loss that regularizes the similarity in relationships of the representations across subjects and modalities. To enable robust training, we further use a conditional convolution to design a single model for encoding images of all modalities. Lastly, we propose a fusion function to combine the disentangled anatomical representations as a set of modality-invariant features for downstream tasks. We evaluate the proposed method on three multi-modal neuroimaging datasets. Experiments show that our proposed method can achieve superior disentangled representations compared to existing disentanglement strategies. Results also indicate that the fused anatomical representation has potential in the downstream task of zero-dose PET reconstruction and brain tumor segmentation.
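
The abstract names three technical ingredients: a margin loss on the similarity relationships of representations across subjects and modalities, a conditional convolution that lets a single encoder handle every MR sequence, and a fusion function over the disentangled anatomical representations. The sketch below is an illustrative PyTorch reconstruction of how such components could look; the function names, the cosine-similarity hinge formulation, the expert-kernel mixing, and the element-wise max fusion are assumptions made for illustration, not the authors' exact implementation.

```python
# Illustrative PyTorch sketch (hypothetical names and formulations).
import torch
import torch.nn.functional as F


def anatomy_margin_loss(z_a, z_b, margin=0.1):
    """z_a, z_b: (B, D) anatomical representations of the same B subjects,
    encoded from two different MR sequences (e.g. T1 and T2).
    Hinge loss: the same-subject, cross-modality similarity should exceed
    the similarity to every other subject by at least `margin`."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    sim = z_a @ z_b.t()                                # (B, B) cosine similarities
    pos = sim.diag().unsqueeze(1)                      # same subject, other modality
    off_diag = ~torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    neg = sim.masked_select(off_diag).view(sim.size(0), -1)  # other subjects
    return F.relu(neg - pos + margin).mean()


class CondConv2d(torch.nn.Module):
    """Toy conditional convolution: a bank of expert kernels is mixed by a
    per-modality code, so a single encoder can process every MR sequence."""

    def __init__(self, in_ch, out_ch, kernel_size, n_modalities):
        super().__init__()
        self.experts = torch.nn.Parameter(
            0.01 * torch.randn(n_modalities, out_ch, in_ch, kernel_size, kernel_size))

    def forward(self, x, modality_code):
        # modality_code: (n_modalities,) soft one-hot vector for the input sequence.
        kernel = torch.einsum('m,moihw->oihw', modality_code, self.experts)
        return F.conv2d(x, kernel, padding=kernel.shape[-1] // 2)


def fuse_anatomy(reps):
    """Toy modality-invariant fusion: element-wise max over the disentangled
    anatomical representations from all available modalities."""
    return torch.stack(reps, dim=0).max(dim=0).values
```

For a batch of paired T1/T2 encodings z_t1 and z_t2, the regularizer would be anatomy_margin_loss(z_t1, z_t2), and fuse_anatomy([z_t1, z_t2]) would produce the modality-invariant feature passed to a downstream head such as zero-dose PET reconstruction or tumor segmentation.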
