Mask detection algorithm based on the improved YOLOv4-tiny
Chenhuan Tang, Shiran Zhu, Meng Zhang, Jie Chen, Xingyi Guo
International Conference on Artificial Intelligence and Computer Engineering (ICAICE 2022)
DOI: 10.1117/12.2671703
Published: 2023-04-28
Citations: 0
Abstract
A lightweight mask detection algorithm based on YOLOv4-tiny is presented. The CBL modules in the backbone feature extraction network (CSPdarknet-tiny) and in the YOLO head are replaced with Ghost modules, which reduces the number of parameters in the network model. By combining the Ghost module, CBAM attention, the SMU activation function, and a BN layer, a lightweight attention residual module (GCS_Block) is designed and embedded into the backbone feature extraction network, improving the model's ability to extract mask features. The K-means++ method is used to cluster anchor boxes on the dataset used in this paper. The experimental results show that, compared with YOLOv4-tiny, the mAP increases from 74.02% to 86.77% and the parameter count decreases from 6,056,606 to 1,657,828. The memory footprint of the model is 5.6 MB.
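The abstract attributes most of the parameter reduction to swapping standard convolutions for Ghost modules. As a rough illustration of why this helps, the sketch below compares the weight counts of a standard convolution and a Ghost module under the standard GhostNet formulation (a primary convolution producing a fraction of the output channels, plus cheap depthwise operations generating the rest). The function names, the ratio `s`, and the cheap-kernel size `dk` are illustrative assumptions, not values taken from this paper.

```python
def conv_params(c_in, c_out, k):
    # Weight count of a standard k x k convolution (bias ignored).
    return c_in * c_out * k * k

def ghost_params(c_in, c_out, k, dk=3, s=2):
    # Ghost module (GhostNet-style sketch, not this paper's exact config):
    # a primary conv produces c_out / s "intrinsic" feature maps, then
    # cheap dk x dk depthwise ops generate the remaining (s - 1) maps
    # per intrinsic channel.
    intrinsic = c_out // s
    primary = c_in * intrinsic * k * k        # primary convolution weights
    cheap = intrinsic * (s - 1) * dk * dk     # depthwise "ghost" weights
    return primary + cheap

# Example: a 3x3 conv from 64 to 128 channels.
standard = conv_params(64, 128, 3)   # 73,728 weights
ghost = ghost_params(64, 128, 3)     # 37,440 weights, roughly half
```

With the ratio `s = 2` used here, the Ghost module costs roughly `1/s` of the standard convolution, which is consistent in spirit with the paper's reported drop from 6,056,606 to 1,657,828 parameters once the replacement is applied throughout the backbone and head.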