Rongchang Li, Tianyang Xu, Xiao-Jun Wu, Linze Li, Xiao Yang, Zhongwei Shen, Josef Kittler
M-adapter: Multi-level image-to-video adaptation for video action recognition

Journal: Computer Vision and Image Understanding (JCR Q2, Computer Science, Artificial Intelligence; IF 4.3)
DOI: 10.1016/j.cviu.2024.104150
Published: 2024-09-25 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S1077314224002315
Citations: 0
Abstract
With the growing size of visual foundation models, training video models from scratch has become costly and challenging. Recent attempts focus on transferring frozen pre-trained Image Models (PIMs) to the video domain by tuning inserted learnable parameters such as adapters and prompts. However, these methods still require saving PIM activations for gradient calculations, so the GPU memory savings are limited. In this paper, we propose a novel parallel branch that adapts the multi-level outputs of the frozen PIM for action recognition. It avoids passing gradients through the PIM and thus naturally has a much lower GPU memory footprint. The proposed adaptation branch consists of hierarchically combined multi-level output adapters (M-adapters), each comprising a fusion module and a temporal module. This design bridges the discrepancies between the pre-training task and the target task at a lower training cost. We show that with larger models, or in scenarios with higher demands on temporal modelling, the proposed method outperforms full-parameter tuning. Finally, despite tuning far fewer parameters, our method achieves superior or comparable performance against current state-of-the-art methods.
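The memory argument in the abstract can be illustrated with a minimal sketch. All names here are hypothetical and NumPy arrays stand in for the actual network blocks: the frozen PIM produces per-block (multi-level) features with no gradient bookkeeping, and only the small parallel branch of adapters holds trainable parameters, combining the levels hierarchically.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # illustrative feature dimension

def frozen_pim_blocks(x, n_blocks=3):
    """Stand-in for a frozen pre-trained image model: return the
    intermediate output of every block (the multi-level features)."""
    W = rng.standard_normal((n_blocks, D, D)) * 0.1  # frozen weights
    outs = []
    for i in range(n_blocks):
        x = np.tanh(x @ W[i])  # no gradients ever flow through here
        outs.append(x)
    return outs

class MAdapter:
    """Sketch of one multi-level output adapter (hypothetical layout):
    fuse the PIM feature at this level with the running branch state,
    then apply a toy temporal mixing step across frames."""
    def __init__(self):
        self.W_fuse = rng.standard_normal((2 * D, D)) * 0.1  # learnable

    def __call__(self, branch, pim_feat):
        fused = np.concatenate([branch, pim_feat], axis=-1) @ self.W_fuse
        # crude temporal module: mix each frame with the clip-level mean
        return fused + fused.mean(axis=0, keepdims=True)

frames = rng.standard_normal((4, D))        # 4 video frames, per-frame features
levels = frozen_pim_blocks(frames)          # multi-level frozen PIM outputs
adapters = [MAdapter() for _ in levels]     # the only trainable parameters

branch = np.zeros_like(frames)
for adapter, feat in zip(adapters, levels): # hierarchical combination
    branch = adapter(branch, feat)

print(branch.shape)  # adapted clip representation, one vector per frame
```

Because the adapter branch only reads the PIM outputs and never feeds anything back into the backbone, backpropagation stops at the branch, which is the source of the memory saving the abstract claims over inserted adapters or prompts.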
Journal Introduction
The central focus of this journal is the computer analysis of pictorial information. Computer Vision and Image Understanding publishes papers covering all aspects of image analysis from the low-level, iconic processes of early vision to the high-level, symbolic processes of recognition and interpretation. A wide range of topics in the image understanding area is covered, including papers offering insights that differ from predominant views.
Research Areas Include:
• Theory
• Early vision
• Data structures and representations
• Shape
• Range
• Motion
• Matching and recognition
• Architecture and languages
• Vision systems