Construction personnel dress code detection based on YOLO framework

IF 8.4 | CAS Tier 2, Computer Science | JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | CAAI Transactions on Intelligence Technology | Published: 2024-03-18 | DOI: 10.1049/cit2.12312
Yunkai Lyu, Xiaobing Yang, Ai Guan, Jingwen Wang, Leni Dai
CAAI Transactions on Intelligence Technology, vol. 9, no. 3, pp. 709-721.
Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/cit2.12312
Citations: 0

Abstract

It is important for construction personnel to observe the dress code: the correct wearing of safety helmets and reflective vests helps protect workers' lives and the safety of the construction site. A detection algorithm based on the YOLO network is proposed for the construction personnel dress code (YOLO-CPDC). Firstly, Multi-Head Self-Attention (MHSA) is introduced into the backbone network to build a hybrid backbone, called the Convolution MHSA Network (CMNet). CMNet gives the model a global field of view and enhances its ability to detect small and occluded targets. Secondly, an efficient, lightweight convolution module named Ghost Shuffle Attention-Conv-BN-SiLU (GSA-CBS) is designed and used in the neck network. The resulting GSANeck network reduces the model size without affecting performance. Finally, SIoU is used in the loss function and Soft-NMS is used for post-processing. Experimental results on a self-constructed dataset show that the YOLO-CPDC algorithm achieves higher detection accuracy than current methods, reaching an mAP50 of 93.6%. Compared with YOLOv5s, the number of parameters is reduced by 18% and the mAP50 is improved by 1.1%. Overall, this research effectively meets the practical demand for dress code detection in construction scenes.
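The Soft-NMS post-processing mentioned in the abstract can be sketched as below. This is a minimal, generic Gaussian Soft-NMS (scores of overlapping boxes are decayed rather than discarded outright), not the authors' exact implementation; the `sigma` and `score_thresh` values are illustrative assumptions.

```python
import math

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    if inter <= 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: keep the top-scoring box, then decay the scores
    of remaining boxes by exp(-IoU^2 / sigma) instead of suppressing them."""
    dets = sorted(zip(boxes, scores), key=lambda d: -d[1])
    keep = []
    while dets:
        box, score = dets.pop(0)
        keep.append((box, score))
        rescored = []
        for b, s in dets:
            s = s * math.exp(-(iou(box, b) ** 2) / sigma)
            if s > score_thresh:  # drop boxes whose score decays to ~0
                rescored.append((b, s))
        dets = sorted(rescored, key=lambda d: -d[1])
    return keep
```

Compared with hard NMS, a heavily overlapped box (e.g. a partially occluded worker behind another) survives with a reduced score instead of being deleted, which is why Soft-NMS is a natural fit for the crowded construction scenes the paper targets.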


Source journal: CAAI Transactions on Intelligence Technology (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
CiteScore: 11.00
Self-citation rate: 3.90%
Articles per year: 134
Review time: 35 weeks
About the journal: CAAI Transactions on Intelligence Technology is a leading venue for original research on the theoretical and experimental aspects of artificial intelligence technology. It is a fully open access journal co-published by the Institution of Engineering and Technology (IET) and the Chinese Association for Artificial Intelligence (CAAI), providing research that is openly accessible to read and share worldwide.