DLE-YOLO: An efficient object detection algorithm with dual-branch lightweight excitation network

Peitao Cheng, Xuanjiao Lei, Haoran Chen, Xiumei Wang

Journal of Information and Intelligence, Volume 3, Issue 2, Pages 91-102. Published online 2024-08-27. DOI: 10.1016/j.jiixd.2024.08.002
Abstract
As a core computer vision task, object detection is applied in a wide range of real-world scenarios. However, high-performing detection algorithms often carry large parameter counts and high computational complexity. To meet the demand for high-performance object detection on mobile and embedded devices with limited computational resources, we propose a new lightweight object detection algorithm called DLE-YOLO. Firstly, we design a novel backbone for feature extraction, the dual-branch lightweight excitation network (DLEN), which is built mainly from dual-branch lightweight excitation units (DLEU). Each DLEU stacks a different number of dual-branch lightweight excitation blocks (DLEB), which extract comprehensive features and integrate information across feature channels. Secondly, to strengthen the network's ability to capture key feature information in regions of interest, the HS-coordinate attention (HS-CA) module is introduced into the network. Thirdly, the SIoU loss is adopted as the localization loss to further improve bounding-box accuracy. Our method achieves 46.0% mAP on the MS-COCO dataset, a 2% mAP improvement over the YOLOv5-m baseline, while reducing the parameter count by 19.3% and GFLOPs by 12.9%. Furthermore, it outperforms several advanced lightweight object detection algorithms, validating the effectiveness of our approach.
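
For readers who want a concrete picture of the attention component, below is a minimal PyTorch sketch of a coordinate-attention block that uses hard-swish and hard-sigmoid activations, which is one plausible reading of "HS-coordinate attention (HS-CA)". The class name HSCoordAttentionSketch, the channel-reduction ratio, and the activation placement are illustrative assumptions, not the paper's exact design.

# A minimal sketch of a coordinate-attention block with "hard" activations,
# offered as one plausible reading of the HS-CA module named in the abstract.
# The reduction ratio, activation choices, and integration points in DLEN are
# assumptions for illustration only.
import torch
import torch.nn as nn


class HSCoordAttentionSketch(nn.Module):
    """Coordinate attention (Hou et al., 2021) with Hardswish/Hardsigmoid."""

    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)  # assumed reduction ratio
        # Pool along one spatial axis at a time to preserve positional information.
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()          # "HS" read as hard-swish (assumption)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)
        self.gate = nn.Hardsigmoid()       # hard-sigmoid gating (assumption)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x_h = self.pool_h(x)                        # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)    # (B, C, W, 1)
        y = torch.cat([x_h, x_w], dim=2)            # (B, C, H+W, 1)
        y = self.act(self.bn1(self.conv1(y)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        y_w = y_w.permute(0, 1, 3, 2)               # back to (B, mid, 1, W)
        a_h = self.gate(self.conv_h(y_h))           # (B, C, H, 1)
        a_w = self.gate(self.conv_w(y_w))           # (B, C, 1, W)
        return x * a_h * a_w                        # reweight features per position


if __name__ == "__main__":
    feats = torch.randn(2, 64, 40, 40)
    attn = HSCoordAttentionSketch(64)
    print(attn(feats).shape)  # torch.Size([2, 64, 40, 40])

In this sketch the two directional pooling branches keep row-wise and column-wise position information, and the two gates reweight the backbone features along each axis; a block of this kind could in principle be inserted after feature-extraction stages such as those in DLEN, though the paper's actual placement is not stated in the abstract.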