Pub Date: 2023-06-14 | DOI: 10.1109/BMSB58369.2023.10211181
Lin Sun, Cong Ye, Hongru Li, Zile Lei, Yuqi Wang, Z. Wang
In multi-hop long range radio (LoRa) networks, overall energy consumption is highly uneven because sensors near the gateway must also forward data packets from other sensors. As a result, the network lifetime is greatly shortened; the problem is particularly acute in the narrow rectangular spaces of urban underground pipeline corridors, where the energy supply is limited. In this paper, a multi-hop LoRa link-scheduling method optimized for energy saving is studied. Each LoRa sensor is assigned to a segment according to its distance from the gateway, and a variable-hop mechanism is used instead of an adjacent-hop mechanism. In addition, a link optimization algorithm based on ϵ-greedy is designed that accounts for the characteristics of narrow spaces and LoRa sensors, modeling the network with the distance-ring exponential stations generator (DRESG) model. Simulations consider multiple service types and compare the energy-saving effects of the STSAA and neighbor-hopping algorithms with the proposed E-greedy and match VH (EGAM-VH) algorithm.
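The abstract does not give the EGAM-VH details, but the ϵ-greedy selection it builds on can be sketched as a simple bandit over candidate next-hop links. The reward model and cost values below are illustrative, not from the paper:

```python
import random

def epsilon_greedy_select(q_values, epsilon=0.1):
    """Pick a candidate link: explore with probability epsilon, else exploit best-known."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda i: q_values[i])

def update_estimate(q_values, counts, link, reward):
    """Incremental-mean update of the reward estimate for the chosen link."""
    counts[link] += 1
    q_values[link] += (reward - q_values[link]) / counts[link]

# Toy run: reward = negative energy cost of a candidate next hop (hypothetical values).
q = [0.0, 0.0, 0.0]
n = [0, 0, 0]
true_cost = [3.0, 1.0, 2.0]   # link 1 is the cheapest
random.seed(0)
for _ in range(500):
    link = epsilon_greedy_select(q, epsilon=0.1)
    update_estimate(q, n, link, -true_cost[link])
```

After enough rounds the greedy choice settles on the lowest-energy link while occasional exploration keeps the other estimates fresh.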
Title: "Multi-hop LoRa Link optimized scheduling method for energy saving in Power IoT" (IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, pp. 1-6).
Pub Date: 2023-06-14 | DOI: 10.1109/BMSB58369.2023.10211131
Hua Zhang, Sen Xu, Jincan Xin, Hua Xu
Artificial Intelligence (AI) and Blockchain (BC) are two of the most popular and disruptive technologies for future wireless communication networks. AI can provide management and strategy for the distributed nodes of a network through its powerful learning and automatic adaptation capabilities, helping the network achieve endogenous intelligence. Blockchain, thanks to its built-in security features, can meet the strict security requirements of future communication systems and provide the transparency and trustworthiness desired in a decentralized network. There are tangible signs that future research will focus on exploring and exploiting the potential of AI and blockchain more thoroughly in future networks. This paper first introduces AI-based network intelligence systems and blockchain systems, respectively, then focuses on the analysis of blockchain data management technology for the future network. Finally, it looks forward to the further development of AI and blockchain in 6G networks.
Title: "Blockchain based data management technology for future intelligent network architecture" (IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, pp. 1-6).
Pub Date: 2023-06-14 | DOI: 10.1109/BMSB58369.2023.10211197
Zhicong Xu, Jintao Wang
This paper investigates the optimal linear precoder for multi-carrier visible light communication (VLC) multiple-input multiple-output (MIMO) systems. In a practical VLC system, the limited linear region of light-emitting diodes (LEDs) leads to clipping noise at the transmitter. The optimal precoder is designed to minimize the mean square error (MSE) on each subcarrier in the presence of clipping noise. Two sub-problems are solved: first, finding the optimal input power of each LED to reduce the effect of clipping; second, allocating this power across the precoder to minimize the MSE. A joint clipping and precoding algorithm (JCPA) is proposed to solve the overall problem. Simulation results demonstrate that the proposed scheme outperforms existing solutions.
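As a minimal illustration of why the per-LED input power matters, the toy model below clips a DC-biased Gaussian (OFDM-like) drive signal to a hypothetical LED linear region [0, 1] and measures the resulting distortion. This is not the paper's JCPA, just the clipping-noise effect it optimizes against:

```python
import numpy as np

def led_clip(x, a_min=0.0, a_max=1.0):
    """Model the limited linear region of an LED: amplitudes outside
    [a_min, a_max] are clipped, which introduces clipping noise."""
    return np.clip(x, a_min, a_max)

def clipping_mse(scale, rng):
    """MSE between the intended drive signal and its clipped version for a
    given input power scale (a toy stand-in for the per-LED power choice)."""
    x = 0.5 + scale * rng.standard_normal(10_000)  # DC-biased OFDM-like signal
    return np.mean((x - led_clip(x)) ** 2)

rng = np.random.default_rng(0)
low_power = clipping_mse(0.1, rng)   # signal stays well inside the linear region
high_power = clipping_mse(0.5, rng)  # frequent excursions beyond [0, 1]
```

Raising the input power past the linear region increases the clipping distortion, which is the trade-off the precoder design must account for.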
Title: "Optimal Precoding Design for VLC MIMO-OFDM Systems in the Presence of Clipping Noise" (IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, pp. 1-6).
Pub Date: 2023-06-14 | DOI: 10.1109/BMSB58369.2023.10211250
Angelo Tropeano, C. Suraci, Giuseppe Marrara, D. Battaglia, A. Molinaro, G. Araniti
The sixth-generation (6G) cellular networks and the Internet of Things (IoT) paradigm will be crucial for the fully connected and digitalized world of the future. Many applications in various fields, spanning from smart homes to automated industry, will benefit from typically resource-constrained IoT devices leveraging 6G connections, provided they are supported by protocols and communication techniques capable of optimizing the use of their resources. Long Range Radio (LoRa) is an emerging Low-Power Wide-Area (LPWA) technology that can effectively address this requirement. However, some challenges must be overcome for it to succeed in 6G, including the need to extend a coverage area that is easily affected by physical factors such as adverse weather conditions. This paper discusses the potential benefits of a multi-hop extension of the LoRaWAN (Long Range Wide Area Network) architecture in the context of IoT applications in 6G systems. We present a field test conducted to analyze the performance of a LoRa-based network architecture for the transmission of images of different sizes, exploiting the multi-hop approach to extend network coverage. The results suggest that the multi-hop LoRa technique could be useful in future 6G IoT networks, especially in remote areas where deploying additional gateways could be expensive.
Title: "A Field Test for Maximizing Coverage through Multi-Hop D2D LoRa Transmission" (IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, pp. 1-5).
Pub Date: 2023-06-14 | DOI: 10.1109/BMSB58369.2023.10211422
Dan Liu, Enfang Cui, Yun Shen, Peng Ding, Zhichao Zhang
With the development of big data and artificial intelligence, data privacy problems have emerged in smart cities. In the context of large-scale data, federated learning can effectively utilize data resources while ensuring user data privacy. This paper designs an edge-cloud collaborative federated learning model training mechanism for smart city applications, so that model training is carried out on the edge side without gathering the original data sets in a cloud computing center, ensuring data privacy and security. Finally, the mechanism is verified and tested in a vehicle recognition scenario in the traffic domain. The results show that it offers clear advantages in detection delay and privacy protection.
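Edge-side training with cloud-side aggregation of the kind described is commonly realized with FedAvg-style weighted model averaging; a minimal sketch on a linear model (all names, data, and hyperparameters are illustrative, not the paper's setup) could look like:

```python
import numpy as np

def local_train(weights, data_x, data_y, lr=0.1, steps=20):
    """One client's local update at the edge: plain gradient descent on a
    linear least-squares model, so raw data never leaves the edge node."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * data_x.T @ (data_x @ w - data_y) / len(data_y)
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: average client models weighted by data size (FedAvg)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three edge clients with private data drawn from the same underlying model.
rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    x = rng.standard_normal((50, 2))
    y = x @ true_w + 0.01 * rng.standard_normal(50)
    clients.append((x, y))

global_w = np.zeros(2)
for _ in range(10):  # communication rounds: only model weights travel
    local_models = [local_train(global_w, x, y) for x, y in clients]
    global_w = fed_avg(local_models, [len(y) for _, y in clients])
```

Only model weights cross the network in each round; the per-client data sets stay local, which is the privacy property the abstract relies on.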
Title: "Federated Learning Model Training Mechanism with Edge Cloud Collaboration for Services in Smart Cities" (IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, pp. 1-5).
Pub Date: 2023-06-14 | DOI: 10.1109/BMSB58369.2023.10211397
Jiong-Qi Wang, Shuai Guo, Q. Wang, Rong Xie, Li Song
Recently, human rendering has attracted much attention thanks to its vast range of applications. With advances in neural rendering and radiance fields, synthesizing realistic novel-view images from multi-view camera images can be achieved with less manual labour. However, due to the data-driven nature of such algorithms, both time and computational efficiency can be unsatisfactory. Hence, we propose an efficient human rendering pipeline that generates geometric and semantic guidance as priors to enhance both efficiency and quality. Specifically, semantic human part parsing guides pixel sampling in 2D space, and a mesh prior guides an occupancy field for effective ray sampling in 3D space. As a result, we achieve considerable improvements over previous methods in both efficiency and rendering quality.
Title: "Efficient Human Rendering with Geometric and Semantic Priors" (IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, pp. 1-6).
Pub Date: 2023-06-14 | DOI: 10.1109/BMSB58369.2023.10211115
Yifu. Liu, Quan Zhou, Xia Jing
Beam selection in millimeter wave (mmWave) communication systems relies on information about the environment surrounding the communication target, and using deep learning methods to analyze sensing data acquired by low-cost radar sensors can effectively reduce communication overhead. In this paper, we further investigate the radar-based beam selection problem using deep learning. The beam selection performance of the Feature Pyramid Network (FPN) and an optimized version of the Residual Network (ResNet) is evaluated on the large-scale real-world DeepSense 6G dataset, and a targeted network is proposed for beam selection. The experimental results show that beam selection accuracy is improved by 18.5% compared to the original LeNet network.
Title: "Deep learning-based radar-assisted beam prediction" (IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, pp. 1-5).
Pub Date: 2023-06-14 | DOI: 10.1109/BMSB58369.2023.10211109
Yuhan Liu, Chaowei Wang, Yujun Shi, Danhao Deng, Tengsen Ma, Weidong Wang
With the advancement of 6G commercial use, a large number of new applications that rely on high speed and low latency have emerged, e.g., Mixed Reality (MR). Since transmitting service content from the central cloud to the MR device incurs great delay and energy consumption, Mobile Edge Computing (MEC) technology has been introduced: it can reduce latency and energy consumption by caching the user's pre-rendered environment frames on the MEC server. Given the limited cache resources on the MEC server, a content caching scheme based on a deep reinforcement learning (DRL) method is proposed to make caching decisions. A new utility function is then proposed to measure the performance of the caching scheme, and the proposed scheme is simulated and verified.
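The paper's utility function is not given in the abstract, but since the title ties caching to Age of Information, a hypothetical utility trading cache hit rate against the age of served content might look like the following sketch (the weighting and function names are assumptions, not the paper's definitions):

```python
def age_of_information(update_times, now):
    """Age of a cached item: time elapsed since its most recent update
    reached the cache (the core AoI quantity)."""
    return now - max(t for t in update_times if t <= now)

def cache_utility(hits, requests, ages, age_weight=0.5):
    """Hypothetical utility: hit rate rewarded, staleness of the served
    pre-rendered frames penalized (fresher cached content is worth more)."""
    hit_rate = hits / requests
    avg_age = sum(ages) / len(ages)
    return hit_rate - age_weight * avg_age

# A cache serving fresh frames scores higher than one serving stale frames
# at the same hit rate.
fresh = cache_utility(8, 10, [0.1] * 8)
stale = cache_utility(8, 10, [1.0] * 8)
```

A DRL agent would then choose which frames to cache so as to maximize such a utility under the MEC server's capacity limit.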
Title: "A DRL Enhanced Caching Based on Age of Information for 6G Mobile Edge Computation" (IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, pp. 1-6).
Pub Date: 2023-06-14 | DOI: 10.1109/BMSB58369.2023.10211187
Fei Wang, Manyu Wang, Han Li, Zongkai Yang, Youbin Song, Zhihua Chen
Accurate recognition of outdoor weather has important application value in weather prediction, disaster warning, automatic driving and other fields. Fog, rain, snow and other bad weather pose a serious threat to driving safety and are therefore the focus of outdoor weather recognition. Video surveillance systems are now widely used on highways, and fog detection based on video images has received extensive attention. This paper studies fog detection technology based on dynamic texture features, using MATLAB as the simulation platform to realize fog detection based on the optical flow method. First, considering that the fog region in a video image changes in shape and concentration over time, appropriate anti-interference methods, including median filtering, are selected for preprocessing. Second, according to characteristics of fog such as diffusion, a method of feature calculation and motion analysis based on optical flow is studied. Finally, corresponding motion rules and analysis methods are established to detect and recognize foggy video regions. Smoke videos are processed in this paper, and the results show that fog regions can be accurately detected; the method has high application value in video fog detection.
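The preprocessing and motion-analysis steps can be illustrated with a small sketch: a 3x3 median filter for the anti-interference preprocessing, followed by a crude per-pixel temporal gradient as a stand-in for the paper's optical-flow features (the frame sizes and noise levels below are illustrative):

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter for preprocessing (suppresses salt-and-pepper
    noise before motion analysis), implemented with edge padding."""
    p = np.pad(img, 1, mode="edge")
    stack = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

def motion_magnitude(prev, curr):
    """Crude stand-in for optical flow: per-pixel temporal gradient magnitude.
    Slowly diffusing fog yields small, spatially smooth values; rigid moving
    objects yield large, concentrated ones."""
    return np.abs(median_filter3(curr) - median_filter3(prev))

rng = np.random.default_rng(2)
frame1 = rng.random((32, 32))
frame2 = frame1 + 0.02 * rng.standard_normal((32, 32))  # slow, fog-like change
mag = motion_magnitude(frame1, frame2)
```

A detector along the lines of the paper would then apply motion rules to such per-region magnitudes to classify slowly diffusing regions as fog.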
Title: "Video Fog Detection Based on Dynamic Texture Analysis" (IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, pp. 1-4).
Pub Date: 2023-06-14 | DOI: 10.1109/BMSB58369.2023.10211610
Lin Du, Manyu Wang, Zongkai Yang, Ke Zhang, Yanhan Li, Zhihua Chen
COVID-19 (Corona Virus Disease 2019) broke out in 2019, and wearing masks is a critical part of stopping the epidemic. Using deep learning technology for mask-wearing detection can improve detection accuracy and reduce the human and material resources required. In this paper, the YOLOv5 (You Only Look Once version 5) model is used for mask-wearing detection. In the experimental validation phase, the performance of YOLOv5 is tested with several different training methods. The detection performance is found to be best with label smoothing, reaching a Mean Average Precision (mAP) of 0.9252.
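Label smoothing, the training method found best above, replaces hard one-hot classification targets with softened ones; a minimal sketch of the standard formulation (ε = 0.1 is a common choice, not necessarily the paper's setting) is:

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    """Label smoothing as commonly used when training YOLO-style detectors:
    the hard target 1 becomes 1 - eps, and the eps mass is spread uniformly
    over all classes, discouraging overconfident predictions."""
    k = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / k

y = np.array([0.0, 1.0, 0.0])       # hard one-hot target for class 1
y_smooth = smooth_labels(y, eps=0.1)
```

The smoothed targets still sum to 1 but cap the true class at 1 - ε + ε/k, which typically improves calibration and generalization of the detector's classification head.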
Title: "A method of mask wearing state detection based on YOLOv5" (IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, pp. 1-5).