Xiangjin Zeng, Genghuan Liu, Jianming Chen, Xiaoyan Wu, Jianglei Di, Zhenbo Ren, Yuwen Qin
Digital Signal Processing, Volume 156, Article 104873. DOI: 10.1016/j.dsp.2024.104873. Published 2024-11-15. JCR Q2 (Engineering, Electrical & Electronic), impact factor 2.9. Available at: https://www.sciencedirect.com/science/article/pii/S1051200424004974
Efficient multimodal object detection via coordinate attention fusion for adverse environmental conditions
Integrating complementary visual information from multimodal image pairs can significantly improve the robustness and accuracy of object detection algorithms, particularly in challenging environments. However, a key challenge lies in the effective fusion of modality-specific features within these algorithms. To address this, we propose a novel lightweight fusion module, termed the Coordinate Attention Fusion (CAF) module, built on the YOLOv5 object detection framework. The CAF module exploits differential amplification and a coordinate attention mechanism to selectively enhance distinctive cross-modal features, thereby preserving critical modality-specific information. To further optimize performance and reduce computational overhead, the two-stream backbone network has been refined, reducing the model's parameter count without compromising accuracy. Comprehensive experiments conducted on two benchmark multimodal datasets demonstrate that the proposed approach consistently surpasses conventional methods and outperforms existing state-of-the-art multimodal object detection algorithms. These findings underscore the potential of cross-modality fusion as a promising direction for improving object detection in adverse conditions.
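The abstract does not give the exact formulation of the CAF module, but its two named ingredients are familiar: differential amplification (amplifying the difference between the two modality streams to expose modality-specific signal) and coordinate attention (gating features with direction-aware pooled statistics along the height and width axes). The NumPy sketch below illustrates that general recipe only; the function names, the amplification factor, the mean-based pooling, and the way the gated difference is recombined with the shared component are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coordinate_attention(feat):
    """Toy coordinate attention: pool along H and along W separately,
    turn the pooled statistics into per-position gates, and rescale.
    feat has shape (C, H, W)."""
    pool_h = feat.mean(axis=2, keepdims=True)   # (C, H, 1): row-wise context
    pool_w = feat.mean(axis=1, keepdims=True)   # (C, 1, W): column-wise context
    gate_h = sigmoid(pool_h)                    # direction-aware gates in (0, 1)
    gate_w = sigmoid(pool_w)
    return feat * gate_h * gate_w               # broadcast over H and W

def caf_fuse(f_rgb, f_ir, amp=2.0):
    """Hypothetical differential-amplification fusion: amplify the
    cross-modal difference, gate it with coordinate attention, and
    add it back onto the shared (averaged) component."""
    diff = amp * (f_rgb - f_ir)        # amplified modality-specific signal
    common = 0.5 * (f_rgb + f_ir)      # component shared by both modalities
    return common + coordinate_attention(diff)

# Two same-shaped feature maps standing in for the two backbone streams.
rgb_feat = np.random.rand(8, 16, 16)
ir_feat = np.random.rand(8, 16, 16)
fused = caf_fuse(rgb_feat, ir_feat)
print(fused.shape)  # (8, 16, 16) — fusion preserves the feature-map shape
```

In a real two-stream detector these operations would be learned convolutional layers rather than fixed pooling and sigmoids, and the fused map would feed the shared detection head; the sketch only shows why the output keeps the input shape, which is what lets such a module drop into a YOLOv5-style backbone without altering downstream layers.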
About the journal:
Digital Signal Processing: A Review Journal is one of the oldest and most established journals in the field of signal processing, yet it aims to be the most innovative. The journal invites top-quality research articles at the frontiers of research in all aspects of signal processing. Our objective is to provide a platform for the publication of ground-breaking research in signal processing with both academic and industrial appeal.
The journal has a special emphasis on statistical signal processing methodology such as Bayesian signal processing, and encourages articles on emerging applications of signal processing such as:
• big data
• machine learning
• internet of things
• information security
• systems biology and computational biology
• financial time series analysis
• autonomous vehicles
• quantum computing
• neuromorphic engineering
• human-computer interaction and intelligent user interfaces
• environmental signal processing
• geophysical signal processing, including seismic signal processing
• chemoinformatics and bioinformatics
• audio, visual and performance arts
• disaster management and prevention
• renewable energy