MULTI-DOMAIN LEARNING BY META-LEARNING: TAKING OPTIMAL STEPS IN MULTI-DOMAIN LOSS LANDSCAPES BY INNER-LOOP LEARNING.

Anthony Sicilia, Xingchen Zhao, Davneet S Minhas, Erin E O'Connor, Howard J Aizenstein, William E Klunk, Dana L Tudorascu, Seong Jae Hwang
{"title":"MULTI-DOMAIN LEARNING BY META-LEARNING: TAKING OPTIMAL STEPS IN MULTI-DOMAIN LOSS LANDSCAPES BY INNER-LOOP LEARNING.","authors":"Anthony Sicilia,&nbsp;Xingchen Zhao,&nbsp;Davneet S Minhas,&nbsp;Erin E O'Connor,&nbsp;Howard J Aizenstein,&nbsp;William E Klunk,&nbsp;Dana L Tudorascu,&nbsp;Seong Jae Hwang","doi":"10.1109/ISBI48211.2021.9433977","DOIUrl":null,"url":null,"abstract":"<p><p>We consider a model-agnostic solution to the problem of Multi-Domain Learning (MDL) for multi-modal applications. Many existing MDL techniques are model-dependent solutions which explicitly require nontrivial architectural changes to construct domain-specific modules. Thus, properly applying these MDL techniques for new problems with well-established models, e.g. U-Net for semantic segmentation, may demand various low-level implementation efforts. In this paper, given emerging multi-modal data (e.g., various structural neuroimaging modalities), we aim to enable MDL purely algorithmically so that widely used neural networks can trivially achieve MDL in a model-independent manner. To this end, we consider a weighted loss function and extend it to an effective procedure by employing techniques from the recently active area of learning-to-learn (meta-learning). Specifically, we take inner-loop gradient steps to dynamically estimate posterior distributions over the hyperparameters of our loss function. Thus, our method is <i>model-agnostic</i>, requiring no additional model parameters and no network architecture changes; instead, only a few efficient algorithmic modifications are needed to improve performance in MDL. We demonstrate our solution to a fitting problem in medical imaging, specifically, in the automatic segmentation of white matter hyperintensity (WMH). We look at two neuroimaging modalities (T1-MR and FLAIR) with complementary information fitting for our problem.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2021 ","pages":"650-654"},"PeriodicalIF":0.0000,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/ISBI48211.2021.9433977","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. IEEE International Symposium on Biomedical Imaging","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISBI48211.2021.9433977","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2021/5/25 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

We consider a model-agnostic solution to the problem of Multi-Domain Learning (MDL) for multi-modal applications. Many existing MDL techniques are model-dependent solutions which explicitly require nontrivial architectural changes to construct domain-specific modules. Thus, properly applying these MDL techniques to new problems with well-established models, e.g., U-Net for semantic segmentation, may demand various low-level implementation efforts. In this paper, given emerging multi-modal data (e.g., various structural neuroimaging modalities), we aim to enable MDL purely algorithmically so that widely used neural networks can trivially achieve MDL in a model-independent manner. To this end, we consider a weighted loss function and extend it to an effective procedure by employing techniques from the recently active area of learning-to-learn (meta-learning). Specifically, we take inner-loop gradient steps to dynamically estimate posterior distributions over the hyperparameters of our loss function. Thus, our method is model-agnostic, requiring no additional model parameters and no network architecture changes; instead, only a few efficient algorithmic modifications are needed to improve performance in MDL. We demonstrate our solution on a fitting problem in medical imaging: the automatic segmentation of white matter hyperintensities (WMH). We examine two neuroimaging modalities (T1-MR and FLAIR) whose complementary information is well suited to our problem.
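
The abstract describes the mechanism only at a high level: a weighted sum of per-domain losses whose weights are refreshed by inner-loop gradient steps before each model update. The sketch below illustrates one way such a scheme can be wired up in PyTorch. It is not the authors' implementation: the softmax parameterization of the weights, the toy network, the held-out batches used to score a simulated step, and names such as domain_logits and inner_lr are assumptions made purely for illustration.

# Illustrative sketch only (assumed details, not the paper's code): a
# softmax-weighted multi-domain loss whose weights are tuned by an
# inner-loop, meta-learning-style simulated step.
# Requires PyTorch >= 2.0 for torch.func.functional_call.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

torch.manual_seed(0)

# Any off-the-shelf network can be dropped in unchanged (model-agnostic);
# a tiny conv net stands in for, e.g., a U-Net segmentation backbone.
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 2, 3, padding=1))
model_opt = torch.optim.Adam(model.parameters(), lr=1e-3)

n_domains = 2                                    # e.g., T1-MR and FLAIR
domain_logits = torch.zeros(n_domains, requires_grad=True)
weight_opt = torch.optim.SGD([domain_logits], lr=1e-1)
inner_lr = 1e-2                                  # step size of the simulated update

def make_batch():
    # Toy (image, mask) pair standing in for a WMH segmentation batch.
    return torch.randn(4, 1, 32, 32), torch.randint(0, 2, (4, 32, 32))

train_batches = [make_batch() for _ in range(n_domains)]     # one batch per domain
held_out_batches = [make_batch() for _ in range(n_domains)]  # small validation split

def weighted_loss(params, batches, weights):
    # Weighted sum of per-domain segmentation losses under the given parameters.
    per_domain = torch.stack([
        F.cross_entropy(functional_call(model, params, (x,)), y)
        for x, y in batches])
    return (weights * per_domain).sum()

for step in range(5):
    params = dict(model.named_parameters())
    w = torch.softmax(domain_logits, dim=0)

    # Inner loop: simulate one gradient step of the shared model under the
    # current domain weights, keeping the graph so the weights stay differentiable.
    sim_loss = weighted_loss(params, train_batches, w)
    grads = torch.autograd.grad(sim_loss, list(params.values()), create_graph=True)
    adapted = {name: p - inner_lr * g
               for (name, p), g in zip(params.items(), grads)}

    # Update the domain weights so that the simulated step helps all domains,
    # scored uniformly on the held-out batches.
    val_loss = weighted_loss(adapted, held_out_batches,
                             torch.full((n_domains,), 1.0 / n_domains))
    weight_opt.zero_grad()
    val_loss.backward()
    weight_opt.step()

    # Outer step: the real model update uses the refreshed weights.
    w = torch.softmax(domain_logits, dim=0).detach()
    loss = weighted_loss(dict(model.named_parameters()), train_batches, w)
    model_opt.zero_grad()
    loss.backward()
    model_opt.step()
    print(f"step {step}: loss={loss.item():.3f}, weights={[round(v, 3) for v in w.tolist()]}")

The point the sketch tries to make concrete is that only the loss weights gain learnable state; the network itself is untouched, so any segmentation backbone can replace the toy model without architectural changes, consistent with the model-agnostic claim above.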
