HAda: Hyper-Adaptive Parameter-Efficient Learning for Multi-View ConvNets

Shiye Wang;Changsheng Li;Zeyu Yan;Wanjun Liang;Ye Yuan;Guoren Wang
{"title":"HAda:多视图卷积神经网络的超自适应参数高效学习","authors":"Shiye Wang;Changsheng Li;Zeyu Yan;Wanjun Liang;Ye Yuan;Guoren Wang","doi":"10.1109/TIP.2024.3504252","DOIUrl":null,"url":null,"abstract":"Recent years have witnessed a great success of multi-view learning empowered by deep ConvNets, leveraging a large number of network parameters. Nevertheless, there is an ongoing consideration regarding the essentiality of all these parameters in multi-view ConvNets. As we know, hypernetworks offer a promising solution to reduce the number of parameters by learning a concise network to generate weights for the larger target network, illustrating the presence of redundant information within network parameters. However, how to leverage hypernetworks for learning parameter-efficient multi-view ConvNets remains underexplored. In this paper, we present a lightweight multi-layer shared Hyper-Adaptive network (HAda), aiming to simultaneously generate adaptive weights for different views and convolutional layers of deep multi-view ConvNets. The adaptability inherent in HAda not only contributes to a substantial reduction in parameter redundancy but also enables the modeling of intricate view-aware and layer-wise information. This capability ensures the maintenance of high performance, ultimately achieving parameter-efficient learning. Specifically, we design a multi-view shared module in HAda to capture information common across views. This module incorporates a shared global gated interpolation strategy, which generates layer-wise gating factors. These factors facilitate adaptive interpolation of global contextual information into the weights. Meanwhile, we put forward a tailored weight-calibrated adapter for each view that facilitates the conveyance of view-specific information. These adapters generate view-adaptive weight scaling calibrators, allowing the selective emphasis of personalized information for each view without introducing excessive parameters. Extensive experiments on six publicly available datasets demonstrate the effectiveness of the proposed method. In particular, HAda can serve as a flexible plug-in strategy to work well with existing multi-view methods for both image classification and image clustering tasks.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"85-99"},"PeriodicalIF":0.0000,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"HAda: Hyper-Adaptive Parameter-Efficient Learning for Multi-View ConvNets\",\"authors\":\"Shiye Wang;Changsheng Li;Zeyu Yan;Wanjun Liang;Ye Yuan;Guoren Wang\",\"doi\":\"10.1109/TIP.2024.3504252\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent years have witnessed a great success of multi-view learning empowered by deep ConvNets, leveraging a large number of network parameters. Nevertheless, there is an ongoing consideration regarding the essentiality of all these parameters in multi-view ConvNets. As we know, hypernetworks offer a promising solution to reduce the number of parameters by learning a concise network to generate weights for the larger target network, illustrating the presence of redundant information within network parameters. However, how to leverage hypernetworks for learning parameter-efficient multi-view ConvNets remains underexplored. 
In this paper, we present a lightweight multi-layer shared Hyper-Adaptive network (HAda), aiming to simultaneously generate adaptive weights for different views and convolutional layers of deep multi-view ConvNets. The adaptability inherent in HAda not only contributes to a substantial reduction in parameter redundancy but also enables the modeling of intricate view-aware and layer-wise information. This capability ensures the maintenance of high performance, ultimately achieving parameter-efficient learning. Specifically, we design a multi-view shared module in HAda to capture information common across views. This module incorporates a shared global gated interpolation strategy, which generates layer-wise gating factors. These factors facilitate adaptive interpolation of global contextual information into the weights. Meanwhile, we put forward a tailored weight-calibrated adapter for each view that facilitates the conveyance of view-specific information. These adapters generate view-adaptive weight scaling calibrators, allowing the selective emphasis of personalized information for each view without introducing excessive parameters. Extensive experiments on six publicly available datasets demonstrate the effectiveness of the proposed method. In particular, HAda can serve as a flexible plug-in strategy to work well with existing multi-view methods for both image classification and image clustering tasks.\",\"PeriodicalId\":94032,\"journal\":{\"name\":\"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society\",\"volume\":\"34 \",\"pages\":\"85-99\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-11-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10770155/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10770155/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Recent years have witnessed great success in multi-view learning empowered by deep ConvNets, which rely on large numbers of network parameters. Whether all of these parameters are essential in multi-view ConvNets, however, remains an open question. Hypernetworks offer a promising route to reducing parameter counts: a compact network is learned to generate the weights of a larger target network, which in itself illustrates the redundancy present in network parameters. Yet how to leverage hypernetworks for learning parameter-efficient multi-view ConvNets remains underexplored. In this paper, we present a lightweight multi-layer shared Hyper-Adaptive network (HAda) that simultaneously generates adaptive weights for the different views and convolutional layers of deep multi-view ConvNets. The adaptability inherent in HAda not only substantially reduces parameter redundancy but also models intricate view-aware and layer-wise information, preserving high performance and ultimately achieving parameter-efficient learning. Specifically, we design a multi-view shared module in HAda to capture information common across views. This module incorporates a shared global gated interpolation strategy that generates layer-wise gating factors, which adaptively interpolate global contextual information into the generated weights. Meanwhile, we propose a weight-calibrated adapter tailored to each view to convey view-specific information. These adapters generate view-adaptive weight-scaling calibrators that selectively emphasize personalized information for each view without introducing excessive parameters. Extensive experiments on six publicly available datasets demonstrate the effectiveness of the proposed method. In particular, HAda can serve as a flexible plug-in strategy that works well with existing multi-view methods on both image classification and image clustering tasks.
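
To make the mechanism concrete, below is a minimal PyTorch sketch of the ideas the abstract describes: a small shared generator produces the convolution weights of a larger multi-view target network, a layer-wise gating factor interpolates a shared global context vector into the conditioning signal, and a per-view calibrator rescales the generated weights. All names, dimensions, and update rules here (HyperAdaptiveConv, the additive conditioning, the per-channel calibrator) are illustrative assumptions, not the authors' implementation.

```python
# A minimal, illustrative sketch of the hypernetwork idea from the abstract.
# Everything below (module name, embedding factorization, gating form) is an
# assumption for illustration, not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HyperAdaptiveConv(nn.Module):
    """Generates 3x3 conv weights for every (view, layer) pair from small embeddings."""

    def __init__(self, num_views, num_layers, channels, embed_dim=64):
        super().__init__()
        self.channels = channels
        # Learnable embeddings: one per view and one per layer (assumed factorization).
        self.view_embed = nn.Embedding(num_views, embed_dim)
        self.layer_embed = nn.Embedding(num_layers, embed_dim)
        # A single shared generator maps a conditioning vector to a full 3x3 kernel.
        self.generator = nn.Linear(embed_dim, channels * channels * 3 * 3)
        # Shared global context vector and a per-layer gate ("gated interpolation").
        self.global_context = nn.Parameter(torch.zeros(embed_dim))
        self.gate = nn.Linear(embed_dim, 1)
        # Per-view weight-scaling calibrators (one scale per output channel).
        self.calibrator = nn.Embedding(num_views, channels)
        nn.init.ones_(self.calibrator.weight)

    def forward(self, x, view_idx, layer_idx):
        v = self.view_embed(torch.tensor(view_idx))
        l = self.layer_embed(torch.tensor(layer_idx))
        # Layer-wise gating factor in (0, 1) interpolates the global context
        # into the conditioning signal before weight generation.
        g = torch.sigmoid(self.gate(l))
        cond = v + l + g * self.global_context
        w = self.generator(cond).view(self.channels, self.channels, 3, 3)
        # View-adaptive calibrator selectively rescales output channels.
        scale = self.calibrator(torch.tensor(view_idx)).view(-1, 1, 1, 1)
        return F.conv2d(x, scale * w, padding=1)


if __name__ == "__main__":
    layer = HyperAdaptiveConv(num_views=3, num_layers=4, channels=16)
    x = torch.randn(2, 16, 32, 32)           # a batch from one view
    y = layer(x, view_idx=1, layer_idx=0)    # weights generated on the fly
    print(y.shape)                           # torch.Size([2, 16, 32, 32])
```

Note the parameter-efficiency argument this sketch embodies: the single generator is shared across all (view, layer) pairs, so adding a view or layer only adds a small embedding and a per-channel calibrator rather than a full kernel tensor per view per layer.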