Robust Multimodal Learning with Missing Modalities via Parameter-Efficient Adaptation.

Md Kaykobad Reza, Ashley Prater-Bennette, M Salman Asif
IEEE Transactions on Pattern Analysis and Machine Intelligence
DOI: 10.1109/TPAMI.2024.3476487
Published: 2024-10-10

Abstract

Multimodal learning seeks to utilize data from multiple sources to improve the overall performance of downstream tasks. It is desirable for redundancies in the data to make multimodal systems robust to missing or corrupted observations in some correlated modalities. However, we observe that the performance of several existing multimodal networks deteriorates significantly if one or multiple modalities are absent at test time. To enable robustness to missing modalities, we propose a simple and parameter-efficient adaptation procedure for pretrained multimodal networks. In particular, we exploit modulation of intermediate features to compensate for the missing modalities. We demonstrate that such adaptation can partially bridge the performance drop due to missing modalities and, in some cases, outperform independent, dedicated networks trained for the available modality combinations. The proposed adaptation requires an extremely small number of parameters (e.g., fewer than 1% of the total parameters) and is applicable to a wide range of modality combinations and tasks. We conduct a series of experiments to highlight the missing-modality robustness of our proposed method on five different multimodal tasks across seven datasets. Our proposed method demonstrates versatility across various tasks and datasets, and outperforms existing methods for robust multimodal learning with missing modalities.
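The abstract describes modulating intermediate features to compensate for missing modalities, using fewer than 1% of the model's parameters. As an illustration only (the paper's exact adapter design is not given here), a minimal sketch of one common form of such modulation is a learned per-channel scale and shift, selected by which modality combination is available at test time; the class and parameter names below are hypothetical:

```python
import torch
import torch.nn as nn

class ModalityAdapter(nn.Module):
    """Hypothetical sketch: parameter-efficient modulation of intermediate
    features. One (scale, shift) pair is learned per missing-modality
    pattern; all backbone weights stay frozen."""

    def __init__(self, num_channels: int, num_modality_combinations: int):
        super().__init__()
        # One embedding row per available-modality combination.
        self.gamma = nn.Embedding(num_modality_combinations, num_channels)
        self.beta = nn.Embedding(num_modality_combinations, num_channels)
        # Initialize to the identity transform (scale 1, shift 0),
        # so adaptation starts from the pretrained network's behavior.
        nn.init.ones_(self.gamma.weight)
        nn.init.zeros_(self.beta.weight)

    def forward(self, features: torch.Tensor, combo_id: torch.Tensor) -> torch.Tensor:
        # features: (batch, channels, ...); combo_id: (batch,) long tensor
        # identifying which modalities are present for each sample.
        g = self.gamma(combo_id)  # (batch, channels)
        b = self.beta(combo_id)   # (batch, channels)
        # Reshape for broadcasting over any trailing spatial/temporal dims.
        shape = (features.size(0), -1) + (1,) * (features.dim() - 2)
        return features * g.view(shape) + b.view(shape)
```

Because only the two small embedding tables are trainable, the adapter's parameter count is `2 * num_modality_combinations * num_channels`, typically a tiny fraction of the backbone, which is consistent with the sub-1% figure quoted in the abstract.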
