{"title":"TAG-fusion:用于语义分割的两阶段注意力引导多模态融合网络","authors":"Zhizhou Zhang, Wenwu Wang, Lei Zhu, Zhibin Tang","doi":"10.1016/j.dsp.2024.104807","DOIUrl":null,"url":null,"abstract":"<div><div>In the current research, leveraging auxiliary modalities, such as depth information or point cloud information, to improve RGB semantic segmentation has shown significant potential. However, existing methods mainly use convolutional modules for aggregating features from auxiliary modalities, thereby lacking sufficient exploitation of long-range dependencies. Moreover, fusion strategies are typically limited to singular approaches. In this paper, we propose a transformer-based multimodal fusion framework to better utilize auxiliary modalities for enhancing semantic segmentation results. Specifically, we employ a dual-stream architecture for extracting features from RGB and auxiliary modalities, respectively. We incorporate both early fusion and deep feature fusion techniques. At each layer, we introduce mixed attention mechanisms to leverage features from other modalities, guiding and enhancing the current modality's features before propagating them to the subsequent stage of feature extraction. After the extraction of features from different modalities, we employ an enhanced cross-attention mechanism for feature interaction, followed by channel fusion to obtain the final semantic features. Subsequently, we provide separate supervision to the network on the RGB stream, auxiliary stream, and fusion stream to facilitate the learning of representations for different modalities. The experimental results demonstrate that our framework exhibits superior performance across diverse modalities. Specifically, our approach achieves state-of-the-art results on the NYU Depth V2, SUN-RGBD, DELIVER and MFNet datasets.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"156 ","pages":"Article 104807"},"PeriodicalIF":2.9000,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"TAG-fusion: Two-stage attention guided multi-modal fusion network for semantic segmentation\",\"authors\":\"Zhizhou Zhang, Wenwu Wang, Lei Zhu, Zhibin Tang\",\"doi\":\"10.1016/j.dsp.2024.104807\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>In the current research, leveraging auxiliary modalities, such as depth information or point cloud information, to improve RGB semantic segmentation has shown significant potential. However, existing methods mainly use convolutional modules for aggregating features from auxiliary modalities, thereby lacking sufficient exploitation of long-range dependencies. Moreover, fusion strategies are typically limited to singular approaches. In this paper, we propose a transformer-based multimodal fusion framework to better utilize auxiliary modalities for enhancing semantic segmentation results. Specifically, we employ a dual-stream architecture for extracting features from RGB and auxiliary modalities, respectively. We incorporate both early fusion and deep feature fusion techniques. At each layer, we introduce mixed attention mechanisms to leverage features from other modalities, guiding and enhancing the current modality's features before propagating them to the subsequent stage of feature extraction. 
After the extraction of features from different modalities, we employ an enhanced cross-attention mechanism for feature interaction, followed by channel fusion to obtain the final semantic features. Subsequently, we provide separate supervision to the network on the RGB stream, auxiliary stream, and fusion stream to facilitate the learning of representations for different modalities. The experimental results demonstrate that our framework exhibits superior performance across diverse modalities. Specifically, our approach achieves state-of-the-art results on the NYU Depth V2, SUN-RGBD, DELIVER and MFNet datasets.</div></div>\",\"PeriodicalId\":51011,\"journal\":{\"name\":\"Digital Signal Processing\",\"volume\":\"156 \",\"pages\":\"Article 104807\"},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2024-10-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Digital Signal Processing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1051200424004329\",\"RegionNum\":3,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Digital Signal Processing","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1051200424004329","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
TAG-fusion: Two-stage attention guided multi-modal fusion network for semantic segmentation
In current research, leveraging auxiliary modalities, such as depth or point cloud information, to improve RGB semantic segmentation has shown significant potential. However, existing methods mainly rely on convolutional modules to aggregate features from auxiliary modalities, and thus fail to fully exploit long-range dependencies. Moreover, fusion strategies are typically limited to a single approach. In this paper, we propose a transformer-based multimodal fusion framework that makes better use of auxiliary modalities to enhance semantic segmentation results. Specifically, we employ a dual-stream architecture to extract features from the RGB and auxiliary modalities, respectively, and we incorporate both early fusion and deep feature fusion. At each layer, we introduce mixed attention mechanisms that leverage features from the other modality to guide and enhance the current modality's features before propagating them to the next stage of feature extraction. After features have been extracted from each modality, we employ an enhanced cross-attention mechanism for feature interaction, followed by channel fusion to obtain the final semantic features. We then supervise the network separately on the RGB stream, the auxiliary stream, and the fusion stream to facilitate the learning of representations for the different modalities. The experimental results demonstrate that our framework exhibits superior performance across diverse modalities. Specifically, our approach achieves state-of-the-art results on the NYU Depth V2, SUN-RGBD, DELIVER and MFNet datasets.
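To make the data flow concrete, the following is a minimal PyTorch sketch of the two-stage pattern the abstract describes: per-layer mixed attention in which each stream is guided by the other modality, followed by cross-attention interaction, channel fusion, and separate supervision heads for the RGB, auxiliary, and fused streams. This is not the authors' TAG-fusion implementation; all class names, dimensions, and the use of plain linear layers as stand-ins for transformer blocks are illustrative assumptions.

```python
# Illustrative sketch only: per-layer cross-modal guidance during extraction (stage 1),
# then cross-attention interaction + channel fusion with three supervised outputs (stage 2).
import torch
import torch.nn as nn


class CrossModalGuidance(nn.Module):
    """Stage 1: features from the other modality guide the current stream."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, cur: torch.Tensor, other: torch.Tensor) -> torch.Tensor:
        # cur, other: (B, N, C) token sequences from the two streams
        guided, _ = self.attn(query=cur, key=other, value=other)
        return self.norm(cur + guided)  # residual keeps the original features


class TwoStageFusionSketch(nn.Module):
    def __init__(self, dim: int = 64, depth: int = 4, num_classes: int = 40):
        super().__init__()
        # Linear layers stand in for the real transformer extraction blocks.
        self.rgb_layers = nn.ModuleList([nn.Linear(dim, dim) for _ in range(depth)])
        self.aux_layers = nn.ModuleList([nn.Linear(dim, dim) for _ in range(depth)])
        self.rgb_guides = nn.ModuleList([CrossModalGuidance(dim) for _ in range(depth)])
        self.aux_guides = nn.ModuleList([CrossModalGuidance(dim) for _ in range(depth)])
        # Stage 2: cross-attention interaction followed by channel fusion.
        self.cross_attn = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.channel_fuse = nn.Linear(2 * dim, dim)
        # Separate prediction heads so each stream can be supervised on its own.
        self.rgb_head = nn.Linear(dim, num_classes)
        self.aux_head = nn.Linear(dim, num_classes)
        self.fuse_head = nn.Linear(dim, num_classes)

    def forward(self, rgb: torch.Tensor, aux: torch.Tensor):
        # rgb, aux: (B, N, C) tokens (e.g. flattened patch embeddings)
        for rgb_layer, aux_layer, g_rgb, g_aux in zip(
            self.rgb_layers, self.aux_layers, self.rgb_guides, self.aux_guides
        ):
            rgb, aux = rgb_layer(rgb), aux_layer(aux)
            # Stage 1: mutual guidance before the next extraction stage
            # (both calls see the pre-guidance features of the other stream).
            rgb, aux = g_rgb(rgb, aux), g_aux(aux, rgb)
        # Stage 2: bidirectional cross-attention (one module shared for brevity),
        # then channel-wise fusion of the two interacted streams.
        rgb2aux, _ = self.cross_attn(query=rgb, key=aux, value=aux)
        aux2rgb, _ = self.cross_attn(query=aux, key=rgb, value=rgb)
        fused = self.channel_fuse(torch.cat([rgb2aux, aux2rgb], dim=-1))
        # Three predictions, each of which would get its own segmentation loss.
        return self.rgb_head(rgb), self.aux_head(aux), self.fuse_head(fused)


if __name__ == "__main__":
    model = TwoStageFusionSketch()
    rgb_tokens = torch.randn(2, 196, 64)  # e.g. 14x14 patches from an RGB image
    aux_tokens = torch.randn(2, 196, 64)  # matching tokens from depth / point-cloud features
    p_rgb, p_aux, p_fused = model(rgb_tokens, aux_tokens)
    print(p_rgb.shape, p_aux.shape, p_fused.shape)  # each (2, 196, 40)
```

In this sketch the total training loss would simply sum the segmentation losses of the three outputs, mirroring the separate supervision on the RGB, auxiliary, and fusion streams described in the abstract.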
Journal description:
Digital Signal Processing: A Review Journal is one of the oldest and most established journals in the field of signal processing, yet it aims to be the most innovative. The Journal invites top-quality research articles at the frontiers of research in all aspects of signal processing. Our objective is to provide a platform for the publication of ground-breaking research in signal processing with both academic and industrial appeal.
The journal has a special emphasis on statistical signal processing methodology such as Bayesian signal processing, and encourages articles on emerging applications of signal processing such as:
• big data
• machine learning
• internet of things
• information security
• systems biology and computational biology
• financial time series analysis
• autonomous vehicles
• quantum computing
• neuromorphic engineering
• human-computer interaction and intelligent user interfaces
• environmental signal processing
• geophysical signal processing, including seismic signal processing
• chemoinformatics and bioinformatics
• audio, visual and performance arts
• disaster management and prevention
• renewable energy