Journal: Neurocomputing (JCR Q1, Computer Science, Artificial Intelligence; CAS Region 2, Computer Science; Impact Factor 5.5)
DOI: 10.1016/j.neucom.2024.128460
Publication date: 2024-08-23 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0925231224012311
A lightweight Transformer-based visual question answering network with Weight-Sharing Hybrid Attention
Recent advances show that Transformer-based models and object detection-based models play an indispensable role in VQA. However, object detection-based models are significantly limited by their redundant and complex detection-box generation process. In contrast, Vision-and-Language Pre-training (VLP) models achieve better performance but require substantial computing power. To this end, we present the Weight-Sharing Hybrid Attention Network (WHAN), a lightweight Transformer-based VQA model. In WHAN, we replace the object detection network with a Transformer encoder and use LoRA to address the language model's inability to adapt to interrogative sentences. We propose a Weight-Sharing Hybrid Attention (WHA) module with parallel residual adapters, which significantly reduces the model's trainable parameters, and we design DWA and BVA modules that allow the model to perform attention operations at different scales. Experiments on the VQA-v2, COCO-QA, GQA, and CLEVR datasets show that WHAN achieves competitive performance with far fewer trainable parameters.
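The abstract's central parameter-efficiency idea — adapting a frozen pretrained model with LoRA so that only small low-rank factors are trained — can be sketched in a few lines. This is a generic illustration of the LoRA technique, not the paper's actual implementation; the class name, dimensions, and initialization scheme below are assumptions for demonstration.

```python
import numpy as np

class LoRALinear:
    """Illustrative low-rank adaptation (LoRA) of a frozen linear layer.

    The pretrained weight W stays frozen; only the low-rank factors
    A (d_in x r) and B (r x d_out), with rank r << min(d_in, d_out),
    are trained. Trainable parameters drop from d_in*d_out to
    r*(d_in + d_out). Hyperparameters (rank, alpha) are hypothetical.
    """

    def __init__(self, d_in, d_out, rank=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen pretrained weight (stand-in random values here).
        self.W = rng.standard_normal((d_in, d_out)) / np.sqrt(d_in)
        # Trainable low-rank factors; B starts at zero so the adapted
        # layer initially matches the frozen layer exactly.
        self.A = rng.standard_normal((d_in, rank)) * 0.01
        self.B = np.zeros((rank, d_out))
        self.scale = alpha / rank

    def __call__(self, x):
        # y = x W + scale * (x A) B  -- only A and B receive gradients.
        return x @ self.W + self.scale * (x @ self.A @ self.B)

    def trainable_params(self):
        return self.A.size + self.B.size
```

For a 768-dimensional layer with rank 4, this trains 4 * (768 + 768) = 6,144 parameters instead of the 589,824 in the full weight matrix — the same order of saving the abstract attributes to LoRA and the parallel residual adapters.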
Journal description:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Its coverage spans neurocomputing theory, practice, and applications.