Xingzhu Liang, Mengyuan Li, Yu-e Lin, Xianjin Fang
Title: GACFNet: A global attention cross-level feature fusion network for aerial image object detection
Journal: Computers & Electrical Engineering, Volume 123, Article 110042 (Q1, Computer Science, Hardware & Architecture)
DOI: 10.1016/j.compeleceng.2024.110042
Publication date: 2025-01-06
URL: https://www.sciencedirect.com/science/article/pii/S0045790624009674
Citations: 0
Abstract
Real-time object detection in aerial images is challenging, primarily due to small, densely packed objects and significant scale variations. Previous methods have addressed these issues by employing fusion structures similar to feature pyramid networks. However, these fusion structures overlook the complementary relationship between feature information from non-adjacent layers. To tackle this, we propose a global attention cross-layer feature fusion network (GACFNet). Firstly, we design a global attention cross-layer feature fusion (GACF) module, which obtains global information by fusing features at different scales and uses an attention mechanism to highlight foreground information in the global feature map. Additionally, we connect the global attention feature map with other layers to establish correlations between non-adjacent layers. Secondly, a large-kernel separable pooling pyramid fusion (LKSPPF) module is proposed to capture a wider receptive field and enhance context information. Thirdly, to better preserve small-object information in low-resolution feature maps, we improve the cross-stage partial fusion module (C2f) of the baseline using a deformable convolution technique (DCNv2). Finally, we design a hybrid regression function (NGIoU loss) to improve object localization and sample allocation in aerial images while accelerating model convergence. Extensive experiments were conducted on three publicly available aerial image datasets. The experimental results show that the method significantly improves the accuracy of object detection in aerial images. The average precision (AP50) on the three datasets reaches 52.7%, 81.8%, and 33.0%, respectively, while a real-time performance of 69.9 frames per second is achieved. The code will be available online at https://github.com/JSJ515-Group/GACFNet/.
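The abstract does not give the exact formulation of the proposed NGIoU loss. As background, hybrid IoU-based regression losses of this kind typically build on the standard GIoU loss, which penalizes the empty area of the smallest box enclosing prediction and target. The following is a minimal illustrative sketch of plain GIoU loss for axis-aligned boxes in (x1, y1, x2, y2) format; it is not the authors' NGIoU implementation.

```python
def giou_loss(a, b):
    """GIoU loss for two axis-aligned boxes given as (x1, y1, x2, y2).

    Illustrative sketch of the standard GIoU baseline, not the
    paper's NGIoU formulation (which is not specified in the abstract).
    """
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b

    # Intersection area (zero if the boxes do not overlap).
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h

    # Union area and IoU.
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union

    # Smallest enclosing box C; GIoU subtracts its "wasted" fraction.
    c_w = max(ax2, bx2) - min(ax1, bx1)
    c_h = max(ay2, by2) - min(ay1, by1)
    c_area = c_w * c_h
    giou = iou - (c_area - union) / c_area

    return 1.0 - giou  # 0 for identical boxes, up to 2 for distant ones
```

Unlike plain IoU loss, GIoU still yields a useful gradient when prediction and target do not overlap, which is why IoU-family hybrids are popular for small, densely packed objects in aerial imagery.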
Journal description:
The impact of computers has nowhere been more revolutionary than in electrical engineering. The design, analysis, and operation of electrical and electronic systems are now dominated by computers, a transformation that has been motivated by the natural ease of interface between computers and electrical systems, and the promise of spectacular improvements in speed and efficiency.
Published since 1973, Computers & Electrical Engineering provides rapid publication of topical research into the integration of computer technology and computational techniques with electrical and electronic systems. The journal publishes papers featuring novel implementations of computers and computational techniques in areas like signal and image processing, high-performance computing, parallel processing, and communications. Special attention will be paid to papers describing innovative architectures, algorithms, and software tools.