{"title":"GLIC:基于全局-局部信息耦合和多尺度特征融合的水下目标探测","authors":"Huipu Xu , Meixiang Zhang , Yongzhi Li","doi":"10.1016/j.jvcir.2024.104330","DOIUrl":null,"url":null,"abstract":"<div><div>With the rapid development of object detection technology, underwater object detection has attracted widespread attention. Most of the existing underwater target detection methods are built based on convolutional neural networks (CNNs), which still have some limitations in the utilization of global information and cannot fully capture the key information in the images. To overcome the challenge of insufficient global–local feature extraction, an underwater target detector (namely GLIC) based on global–local information coupling and multi-scale feature fusion is proposed in this paper. Our GLIC consists of three main components: spatial pyramid pooling, global–local information coupling, and multi-scale feature fusion. Firstly, we embed spatial pyramid pooling, which improves the robustness of the model while retaining more spatial information. Secondly, we design the feature pyramid network with global–local information coupling. The global context of the transformer branch and the local features of the CNN branch interact with each other to enhance the feature representation. Finally, we construct a Multi-scale Feature Fusion (MFF) module that utilizes balanced semantic features integrated at the same depth for multi-scale feature fusion. In this way, each resolution in the pyramid receives equal information from others, thus balancing the information flow and making the features more discriminative. As demonstrated in comprehensive experiments, our GLIC, respectively, achieves 88.46%, 87.51%, and 74.94% mAP on the URPC2019, URPC2020, and UDD datasets.</div></div>","PeriodicalId":54755,"journal":{"name":"Journal of Visual Communication and Image Representation","volume":"105 ","pages":"Article 104330"},"PeriodicalIF":2.6000,"publicationDate":"2024-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"GLIC: Underwater target detection based on global–local information coupling and multi-scale feature fusion\",\"authors\":\"Huipu Xu , Meixiang Zhang , Yongzhi Li\",\"doi\":\"10.1016/j.jvcir.2024.104330\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>With the rapid development of object detection technology, underwater object detection has attracted widespread attention. Most of the existing underwater target detection methods are built based on convolutional neural networks (CNNs), which still have some limitations in the utilization of global information and cannot fully capture the key information in the images. To overcome the challenge of insufficient global–local feature extraction, an underwater target detector (namely GLIC) based on global–local information coupling and multi-scale feature fusion is proposed in this paper. Our GLIC consists of three main components: spatial pyramid pooling, global–local information coupling, and multi-scale feature fusion. Firstly, we embed spatial pyramid pooling, which improves the robustness of the model while retaining more spatial information. Secondly, we design the feature pyramid network with global–local information coupling. The global context of the transformer branch and the local features of the CNN branch interact with each other to enhance the feature representation. 
Finally, we construct a Multi-scale Feature Fusion (MFF) module that utilizes balanced semantic features integrated at the same depth for multi-scale feature fusion. In this way, each resolution in the pyramid receives equal information from others, thus balancing the information flow and making the features more discriminative. As demonstrated in comprehensive experiments, our GLIC, respectively, achieves 88.46%, 87.51%, and 74.94% mAP on the URPC2019, URPC2020, and UDD datasets.</div></div>\",\"PeriodicalId\":54755,\"journal\":{\"name\":\"Journal of Visual Communication and Image Representation\",\"volume\":\"105 \",\"pages\":\"Article 104330\"},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2024-10-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Visual Communication and Image Representation\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1047320324002864\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Visual Communication and Image Representation","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1047320324002864","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
GLIC: Underwater target detection based on global–local information coupling and multi-scale feature fusion
With the rapid development of object detection technology, underwater object detection has attracted widespread attention. Most existing underwater target detection methods are built on convolutional neural networks (CNNs), which remain limited in how they exploit global information and therefore cannot fully capture the key information in an image. To overcome this insufficient global–local feature extraction, this paper proposes an underwater target detector, GLIC, based on global–local information coupling and multi-scale feature fusion. GLIC consists of three main components: spatial pyramid pooling, global–local information coupling, and multi-scale feature fusion. First, we embed spatial pyramid pooling, which improves the robustness of the model while retaining more spatial information. Second, we design a feature pyramid network with global–local information coupling, in which the global context from the transformer branch and the local features from the CNN branch interact to enhance the feature representation. Finally, we construct a Multi-scale Feature Fusion (MFF) module that integrates balanced semantic features at the same depth for multi-scale feature fusion. In this way, each resolution in the pyramid receives equal information from the others, which balances the information flow and makes the features more discriminative. Comprehensive experiments show that GLIC achieves 88.46%, 87.51%, and 74.94% mAP on the URPC2019, URPC2020, and UDD datasets, respectively.
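To make the global–local coupling idea concrete, the following is a minimal PyTorch sketch of a block in which a CNN (local) branch and a transformer (global) branch exchange information. The layer names, channel sizes, and the specific coupling rule (projected addition in both directions) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a global-local information coupling block (assumed design,
# not the paper's code): a conv branch captures local detail, a self-attention
# branch captures global context, and each injects information into the other.
import torch
import torch.nn as nn


class GlobalLocalCouplingBlock(nn.Module):
    """Couples local CNN features with global self-attention context."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Local branch: depthwise + pointwise convs capture neighborhood detail.
        self.local_branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=channels),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Global branch: multi-head self-attention over flattened spatial tokens.
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # Projections used when each branch injects information into the other.
        self.local_to_global = nn.Linear(channels, channels)
        self.global_to_local = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.local_branch(x)                       # (B, C, H, W)

        tokens = x.flatten(2).transpose(1, 2)              # (B, H*W, C)
        # Local features condition the attention input (local -> global coupling).
        tokens = tokens + self.local_to_global(local.flatten(2).transpose(1, 2))
        tokens = self.norm(tokens)
        global_ctx, _ = self.attn(tokens, tokens, tokens)  # (B, H*W, C)
        global_map = global_ctx.transpose(1, 2).reshape(b, c, h, w)

        # Global context modulates the local features (global -> local coupling).
        return local + self.global_to_local(global_map)


if __name__ == "__main__":
    block = GlobalLocalCouplingBlock(channels=64)
    feats = torch.randn(2, 64, 32, 32)
    print(block(feats).shape)  # torch.Size([2, 64, 32, 32])
```

In this sketch the coupling is symmetric and residual, so either branch can be ablated without changing tensor shapes; the actual paper may use a different interaction mechanism.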
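Similarly, the balanced fusion described for the MFF module can be sketched as follows: every pyramid level is resized to a common resolution, averaged so that each level contributes equally, and the integrated feature is redistributed back to all levels. The choice of the middle level as the integration depth and the residual redistribution are assumptions for illustration only.

```python
# Assumed sketch of balanced multi-scale feature fusion: integrate all FPN
# levels at one depth with equal weight, then scatter the result back.
from typing import List

import torch
import torch.nn.functional as F


def balanced_multi_scale_fusion(pyramid: List[torch.Tensor]) -> List[torch.Tensor]:
    """Fuse pyramid levels so each resolution receives equal information from the others."""
    # Use the spatial size of the middle pyramid level as the integration depth.
    target_hw = pyramid[len(pyramid) // 2].shape[-2:]

    # Resize every level to the target size and average them (equal contribution).
    resized = [F.interpolate(f, size=target_hw, mode="nearest") for f in pyramid]
    integrated = torch.stack(resized, dim=0).mean(dim=0)

    # Redistribute: scatter the integrated feature back and add it residually.
    outputs = []
    for feat in pyramid:
        scattered = F.interpolate(integrated, size=feat.shape[-2:], mode="nearest")
        outputs.append(feat + scattered)
    return outputs


if __name__ == "__main__":
    levels = [torch.randn(1, 256, s, s) for s in (64, 32, 16, 8)]
    fused = balanced_multi_scale_fusion(levels)
    print([f.shape[-1] for f in fused])  # [64, 32, 16, 8]
```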
Journal introduction:
The Journal of Visual Communication and Image Representation publishes papers on state-of-the-art visual communication and image representation, with emphasis on novel technologies and theoretical work in this multidisciplinary area of pure and applied research. The field of visual communication and image representation is considered in its broadest sense and covers both digital and analog aspects as well as processing and communication in biological visual systems.