Transfer Learning With Singular Value Decomposition of Multichannel Convolution Matrices

IF 2.7 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Neural Computation | Pub Date: 2023-09-08 | DOI: 10.1162/neco_a_01608
Tak Shing Au Yeung;Ka Chun Cheung;Michael K. Ng;Simon See;Andy Yip
{"title":"Transfer Learning With Singular Value Decomposition of Multichannel Convolution Matrices","authors":"Tak Shing Au Yeung;Ka Chun Cheung;Michael K. Ng;Simon See;Andy Yip","doi":"10.1162/neco_a_01608","DOIUrl":null,"url":null,"abstract":"The task of transfer learning using pretrained convolutional neural networks is considered. We propose a convolution-SVD layer to analyze the convolution operators with a singular value decomposition computed in the Fourier domain. Singular vectors extracted from the source domain are transferred to the target domain, whereas the singular values are fine-tuned with a target data set. In this way, dimension reduction is achieved to avoid overfitting, while some flexibility to fine-tune the convolution kernels is maintained. We extend an existing convolution kernel reconstruction algorithm to allow for a reconstruction from an arbitrary set of learned singular values. A generalization bound for a single convolution-SVD layer is devised to show the consistency between training and testing errors. We further introduce a notion of transfer learning gap. We prove that the testing error for a single convolution-SVD layer is bounded in terms of the gap, which motivates us to develop a regularization model with the gap as the regularizer. Numerical experiments are conducted to demonstrate the superiority of the proposed model in solving classification problems and the influence of various parameters. In particular, the regularization is shown to yield a significantly higher prediction accuracy.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":null,"pages":null},"PeriodicalIF":2.7000,"publicationDate":"2023-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Computation","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10302156/","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

The task of transfer learning using pretrained convolutional neural networks is considered. We propose a convolution-SVD layer to analyze the convolution operators with a singular value decomposition computed in the Fourier domain. Singular vectors extracted from the source domain are transferred to the target domain, whereas the singular values are fine-tuned with a target data set. In this way, dimension reduction is achieved to avoid overfitting, while some flexibility to fine-tune the convolution kernels is maintained. We extend an existing convolution kernel reconstruction algorithm to allow for a reconstruction from an arbitrary set of learned singular values. A generalization bound for a single convolution-SVD layer is devised to show the consistency between training and testing errors. We further introduce a notion of transfer learning gap. We prove that the testing error for a single convolution-SVD layer is bounded in terms of the gap, which motivates us to develop a regularization model with the gap as the regularizer. Numerical experiments are conducted to demonstrate the superiority of the proposed model in solving classification problems and the influence of various parameters. In particular, the regularization is shown to yield a significantly higher prediction accuracy.
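To make the Fourier-domain SVD concrete, below is a minimal NumPy sketch (not the authors' code) of the underlying linear algebra: a multichannel circular convolution block-diagonalizes under the 2-D DFT, so the SVD of the whole convolution matrix reduces to many small per-frequency SVDs. Freezing the transferred singular vectors (U, Vh) while fine-tuning the singular values S mirrors the transfer step the abstract describes. The names conv_svd and reconstruct_kernel are hypothetical, periodic boundary conditions are assumed, and the naive support truncation at the end is only a stand-in for the paper's kernel reconstruction algorithm.

```python
import numpy as np

def conv_svd(kernel, H, W):
    """SVD of a multichannel circular convolution, computed per frequency.

    kernel: (c_out, c_in, kh, kw). On H x W inputs with periodic boundary
    conditions, the 2-D DFT block-diagonalizes the convolution matrix into
    one c_out x c_in block per frequency, so the full SVD is the union of
    the per-block SVDs (up to the DFT normalization convention).
    """
    c_out, c_in = kernel.shape[:2]
    # Zero-pad each kernel slice to the full spatial size and take its 2-D FFT.
    K_hat = np.fft.fft2(kernel, s=(H, W), axes=(-2, -1))    # (c_out, c_in, H, W)
    # One c_out x c_in frequency block per (u, v); stack them for a batched SVD.
    blocks = K_hat.transpose(2, 3, 0, 1).reshape(H * W, c_out, c_in)
    U, S, Vh = np.linalg.svd(blocks)                        # batched over H*W blocks
    return U, S, Vh

def reconstruct_kernel(U, S, Vh, H, W, kh, kw):
    """Rebuild a spatial kernel from fine-tuned singular values S, keeping the
    transferred singular vectors U, Vh fixed. Truncating to (kh, kw) is a naive
    stand-in for the paper's more careful reconstruction algorithm."""
    c_out, c_in = U.shape[1], Vh.shape[2]
    k = min(c_out, c_in)
    blocks = U[:, :, :k] @ (S[:, :, None] * Vh[:, :k, :])   # (H*W, c_out, c_in)
    K_hat = blocks.reshape(H, W, c_out, c_in).transpose(2, 3, 0, 1)
    kernel = np.real(np.fft.ifft2(K_hat, axes=(-2, -1)))
    return kernel[:, :, :kh, :kw]

# Hypothetical usage: keep U, Vh from a pretrained kernel, fine-tune only S.
rng = np.random.default_rng(0)
K = rng.standard_normal((8, 3, 3, 3))   # pretrained 3x3 kernel, 3 -> 8 channels
U, S, Vh = conv_svd(K, H=32, W=32)
S_tuned = S * 1.05                      # stand-in for values learned on target data
K_new = reconstruct_kernel(U, S_tuned, Vh, H=32, W=32, kh=3, kw=3)
```

With the untouched S, this round-trips the original kernel exactly, since the zero-padded kernel has no energy outside its (kh, kw) support; once S is fine-tuned, the inverse FFT generally has full spatial support, which is presumably why a dedicated reconstruction algorithm for compactly supported kernels is needed.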
Source journal: Neural Computation
Neural Computation — Engineering & Technology: Computer Science, Artificial Intelligence
CiteScore: 6.30
Self-citation rate: 3.40%
Articles per year: 83
Review time: 3.0 months
Journal description: Neural Computation is uniquely positioned at the crossroads between neuroscience and TMCS and welcomes the submission of original papers from all areas of TMCS, including: Advanced experimental design; Analysis of chemical sensor data; Connectomic reconstructions; Analysis of multielectrode and optical recordings; Genetic data for cell identity; Analysis of behavioral data; Multiscale models; Analysis of molecular mechanisms; Neuroinformatics; Analysis of brain imaging data; Neuromorphic engineering; Principles of neural coding, computation, circuit dynamics, and plasticity; Theories of brain function.