Safety Helmet Detection Based on Optimized YOLOv5
Authors: J. Fang, Xiang Lin, Fengxiang Zhou, Yan Tian, Min Zhang
DOI: 10.1109/PHM58589.2023.00030
Published in: 2023 Prognostics and Health Management Conference (PHM), May 2023
Citations: 0
Abstract
Whether employees wear safety helmets is an important safety issue in power-related work scenarios, and many accidents can be avoided by monitoring helmet use. However, traditional object detection methods are vulnerable to interference from weather, lighting, personnel density, surveillance-camera placement, and other conditions of the working environment, and they perform poorly on such small targets. This paper therefore adopts the high-precision YOLOv5 (You Only Look Once) as the detection framework and modifies its backbone network to improve small-target recognition. The original backbone is pruned and compressed, and SwinT (Swin Transformer) modules are added, exploiting their strength on small targets to raise overall recognition accuracy. SE (Squeeze-and-Excitation) and CBAM (Convolutional Block Attention Module) attention modules are also added to further improve the accuracy of the whole network. Finally, experiments are conducted on the SHWD (Safety Helmet Wearing Dataset). The results show that, compared with the unmodified network, the optimized YOLO structure proposed in this paper achieves significantly higher accuracy on the validation set, with an average recognition accuracy of 93%.
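The two attention mechanisms named in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the weights below are randomly initialised stand-ins for what would be learned parameters, and CBAM's learned 7×7 convolution over the pooled maps is replaced by a fixed average purely to keep the example self-contained. The sketch only shows the structure: SE gates each channel by a scalar computed from global context, while CBAM's spatial branch gates each spatial location.

```python
import numpy as np

def se_block(x, reduction=4, rng=None):
    """Squeeze-and-Excitation channel attention on a (C, H, W) feature map.

    Weights are randomly initialised here purely for illustration; in the
    paper's network they would be learned end to end.
    """
    rng = rng or np.random.default_rng(0)
    c = x.shape[0]
    # Squeeze: global average pooling collapses each channel to one scalar.
    z = x.mean(axis=(1, 2))                               # shape (C,)
    # Excitation: bottleneck MLP produces per-channel gates in (0, 1).
    w1 = rng.standard_normal((c // reduction, c)) * 0.1   # stand-in weights
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0.0))))  # sigmoid(W2·relu(W1·z))
    # Scale: reweight every channel by its gate.
    return x * s[:, None, None]

def cbam_spatial(x):
    """CBAM spatial attention: pool across channels, gate each (h, w) location."""
    avg = x.mean(axis=0)                                  # shape (H, W)
    mx = x.max(axis=0)                                    # shape (H, W)
    # CBAM fuses these maps with a learned 7x7 conv; a fixed average stands in.
    gate = 1.0 / (1.0 + np.exp(-(avg + mx) / 2.0))
    return x * gate[None, :, :]

feat = np.random.default_rng(1).standard_normal((8, 16, 16))
out = cbam_spatial(se_block(feat))
print(out.shape)  # (8, 16, 16) -- attention reweights, it never changes shape
```

Note the design point the abstract relies on: both modules are shape-preserving, which is what lets them be dropped into an existing YOLOv5 backbone without altering the surrounding layers.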