Pub Date: 2021-12-01  DOI: 10.1109/GLOBECOM46510.2021.9685489
Huanzhuo Wu, Jia He, Máté Tömösközi, Zuo Xiang, F. Fitzek
Modern manufacturers are integrating novel digital technologies - such as 5G-based wireless networks, the Internet of Things (IoT), and cloud computing - to elevate their production processes to the level of smart factories. In a modern smart factory, time-critical applications are increasingly important for efficient and safe production. However, these applications suffer from delays in data transmission and processing due to the high density of wireless sensors and the large volumes of data they generate. Since next-generation networks have made network nodes intelligent and capable of handling multiple network functions, the increased computational power of these nodes makes it possible to offload some of the computational overhead. In this paper, we present for the first time our IA-Net-Lite industrial anomaly detection system with the novel capability of in-network data processing. IA-Net-Lite utilizes intelligent network devices to combine data transmission and processing, and to progressively filter redundant data in order to reduce service latency. Tests in a practical network emulator show that the proposed approach can reduce service latency by up to 40%. Moreover, the benefits of our approach could potentially be exploited in other large-volume and artificial intelligence applications.
{"title":"In-Network Processing for Low-Latency Industrial Anomaly Detection in Softwarized Networks","authors":"Huanzhuo Wu, Jia He, Máté Tömösközi, Zuo Xiang, F. Fitzek","doi":"10.1109/GLOBECOM46510.2021.9685489","DOIUrl":"https://doi.org/10.1109/GLOBECOM46510.2021.9685489","url":null,"abstract":"Modern manufacturers are currently undertaking the integration of novel digital technologies - such as 5G-based wireless networks, the Internet of Things (IoT), and cloud computing - to elevate their production process to a brand new level, the level of smart factories. In the setting of a modern smart factory, time-critical applications are increasingly important to facilitate efficient and safe production. However, these applications suffer from delays in data transmission and processing due to the high density of wireless sensors and the large volumes of data that they generate. As the advent of next-generation networks has made network nodes intelligent and capable of handling multiple network functions, the increased computational power of the nodes makes it possible to offload some of the computational overhead. In this paper, we show for the first time our IA-Net-Lite industrial anomaly detection system with the novel capability of in-network data processing. IA-Net-Lite utilizes intelligent network devices to combine data transmission and processing, as well as to progressively filter redundant data in order to optimize service latency. By testing in a practical network emulator, we showed that the proposed approach can reduce the service latency by up to 40%. Moreover, the benefits of our approach could potentially be exploited in other large-volume and artificial intelligence applications.","PeriodicalId":200641,"journal":{"name":"2021 IEEE Global Communications Conference (GLOBECOM)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116190196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-12-01  DOI: 10.1109/GLOBECOM46510.2021.9685565
Hong-Bae Jeon, Sung-Ho Park, Jaedon Park, Kaibin Huang, C. Chae
In this paper, we propose a novel wireless backhaul architecture, mounted on a high-altitude aerial platform, which is enabled by a reconfigurable intelligent surface (RIS). We assume a sudden increase in traffic in an urban area; to serve the ground users therein, authorities rapidly deploy unmanned-aerial-vehicle base stations (UAV-BSs). In this scenario, since the direct backhaul link from the ground source can be blocked by obstacles in the urban area, we propose reflecting the backhaul signal via an aerial RIS. We optimize the placement and array-partitioning strategy of the aerial RIS and the phase of each RIS element, which increases energy efficiency while guaranteeing a reliable backhaul link for every UAV-BS. We show that the complexity of our algorithm is upper-bounded by the quadratic order, thus implying high computational efficiency. We verify the performance of the proposed algorithm via extensive numerical evaluations and show that our method achieves outstanding energy-efficiency performance compared to benchmark schemes.
{"title":"RIS-assisted Aerial Backhaul System for UAV-BSs: An Energy-efficiency Perspective","authors":"Hong-Bae Jeon, Sung-Ho Park, Jaedon Park, Kaibin Huang, C. Chae","doi":"10.1109/GLOBECOM46510.2021.9685565","DOIUrl":"https://doi.org/10.1109/GLOBECOM46510.2021.9685565","url":null,"abstract":"In this paper, we propose a novel wireless backhaul architecture, mounted on a high-altitude aerial platform, which is enabled by reconfigurable intelligent surface (RIS). We assume a sudden increase in traffic in an urban area, and to serve the ground users therein, authorities rapidly deploy unmanned-aerial-vehicle base-stations (UAV-BSs). In this scenario, since the direct backhaul link from the ground source can be blocked due to several obstacles from the urban area, we propose reflecting the backhaul signal using aerial-RIS and the phase of each RIS element, which leads to an increase in energy-efficiency ensuring the reliable backhaul link for every UAV-BS. We optimize the placement and array-partitioning strategy of aerial-RIS and the phase of each RIS element, which leads to an increase of energy-efficiency under guaranteeing the reliable backhaul link for every UAV-BS. We show that the complexity of our algorithm is upper-bounded by the quadratic order, thus implying high computational efficiency. We verify the performance of the proposed algorithm via extensive numerical evaluations and show that our method achieves an outstanding performance in terms of energy-efficiency compared to benchmark schemes.","PeriodicalId":200641,"journal":{"name":"2021 IEEE Global Communications Conference (GLOBECOM)","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116318401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unmanned aerial vehicles (UAVs) have attracted growing attention for enhancing the performance of mobile wireless sensor networks (MWSNs), since they can act as aerial base stations (ABSs) and autonomously collect data. In this paper, we consider constructing a virtual antenna array (VAA) consisting of mobile sensor nodes (MSNs) and adopting collaborative beamforming (CB) to achieve long-distance and efficient uplink data transmissions to the ABSs. First, we formulate a high data transmission rate multi-objective optimization problem (HDTRMOP) for the CB-based UAV-assisted MWSN to simultaneously improve the total transmission rates, suppress the total maximum sidelobe levels (SLLs), and reduce the total motion energy consumption of the MSNs by jointly optimizing the positions and excitation current weights of the MSN-enabled VAA and the order of communicating with different ABSs. Then, we propose an improved non-dominated sorting genetic algorithm-III (INSGA-III) with chaos initialization, an average grade mechanism, and a hybrid-solution generation strategy to solve the problem. Simulation results verify that the proposed algorithm can effectively solve the formulated HDTRMOP and that it outperforms several benchmark methods.
{"title":"Uplink Data Transmission Based on Collaborative Beamforming in UAV-assisted MWSNs","authors":"Aimin Wang, Yuxin Wang, Geng Sun, Jiahui Li, Shuang Liang, Yanheng Liu","doi":"10.1109/GLOBECOM46510.2021.9685853","DOIUrl":"https://doi.org/10.1109/GLOBECOM46510.2021.9685853","url":null,"abstract":"Unmanned aerial vehicles (UAVs) have attracted growing attention in enhancing the performance of mobile wireless sensor networks (MWSNs) since they can act as the aerial base stations (ABSs) and have the autonomous nature to collect data. In this paper, we consider to construct a virtual antenna array (VAA) consists of mobile sensor nodes (MSNs) and adopt the collaborative beamforming (CB) to achieve the long-distance and efficient uplink data transmissions with the ABSs. First, we formulate a high data transmission rate multi-objective optimization problem (HDTRMOP) of the CB-based UAV-assisted MWSN to simultaneously improve the total transmission rates, suppress the total maximum sidelobe levels (SLLs) and reduce the total motion energy consumptions of MSNs by jointly optimizing the positions and excitation current weights of MSN-enabled VAA, and the order of communicating with different ABSs. Then, we propose an improved non-dominated sorting genetic algorithm-III (INSGA-III) with chaos initialization, average grade mechanism and hybrid-solution generate strategy to solve the problem. Simulation results verify that the proposed algorithm can effectively solve the formulated HDTRMOP and it has better performance than some other benchmark methods.","PeriodicalId":200641,"journal":{"name":"2021 IEEE Global Communications Conference (GLOBECOM)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116512518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed Denial of Service (DDoS) flooding attacks have been a severe threat to the Internet for decades. These attacks are usually launched by exhausting bandwidth, network resources, or server resources. Since most of these attacks are launched abruptly and severely, it is crucial to develop an efficient DDoS flooding attack detection system. In this paper, we present Slider, an online sketch-based DDoS flooding attack detection system. Slider utilizes a new type of sketch structure, namely Rotation Sketch, to effectively detect DDoS flooding attacks and efficiently identify the malicious hosts. Slider also learns the characteristics of the current network over a period specified by the network operator, in order to periodically update the parameters of its detection model. We have developed a prototype of Slider, and evaluation results on real-world traffic and public DDoS/DoS attack datasets demonstrate that Slider can effectively detect various DDoS flooding attacks with high precision and robustness.
{"title":"Slider: Towards Precise, Robust and Updatable Sketch-based DDoS Flooding Attack Detection","authors":"Xin Cheng, Zhiliang Wang, Shize Zhang, Jia Li, Jiahai Yang, Xinran Liu","doi":"10.1109/GLOBECOM46510.2021.9685622","DOIUrl":"https://doi.org/10.1109/GLOBECOM46510.2021.9685622","url":null,"abstract":"Distributed Denial of Service (DDoS) flooding attacks have been a severe threat to the Internet for decades. These attacks usually are launched by exhausting bandwidth, network resources or server resources. Since most of these attacks are launched abruptly and severely, it is crucial to develop an efficient DDoS flooding attack detection system. In this paper, we present Slider, an online sketch-based DDoS flooding attack detection system. Slider utilizes a new type of sketch structure, namely Rotation Sketch, to effectively detect DDoS flooding attacks and efficiently identify the malicious hosts. Meanwhile, Slider also learns the characteristics of the current network during the time specified by the network operator to periodically update the parameters of its detection model. We have developed a prototype of Slider and the evaluation results on real-world traffic and public DDoS/DoS attack datasets demonstrate that Slider can effectively detect various DDoS flooding attacks with high precision and robustness.","PeriodicalId":200641,"journal":{"name":"2021 IEEE Global Communications Conference (GLOBECOM)","volume":"138 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121532183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Smart grids have been increasingly spotted as high-profile targets of cyber assaults over the years. To better understand the cyber threat landscape, honeypots have been widely used in the smart grid security community, e.g., to identify unauthorized penetration attempts and observe the behaviors involved in such activities. In this paper, we propose a honeypot-enabled optimal defense strategy selection approach for smart grids, based on a novel stochastic game. Specifically, the interactions between the attacker and the smart grid defender are captured using our designed stochastic game, a non-cooperative two-player game with incomplete information. We take into account various possible defenses from the smart grid defender and offensive strategies from the attacker. The Nash equilibrium of the stochastic game model is then calculated, from which an optimal defense strategy for the smart grid defender is derived. Extensive simulation experiments demonstrate the effectiveness of the proposed scheme.
{"title":"Honeypot-Enabled Optimal Defense Strategy Selection for Smart Grids","authors":"Beibei Li, Yaxin Shi, Qinglei Kong, Chao Zhai, Yuankai Ouyang","doi":"10.1109/GLOBECOM46510.2021.9685397","DOIUrl":"https://doi.org/10.1109/GLOBECOM46510.2021.9685397","url":null,"abstract":"Smart grids have been increasingly spotted as high-profile targets of cyber assaults over the years. To better understand the cyber threat landscape, honeypots have been widely used in the smart grid security community, i.e., identifying unauthorized penetration attempts and observing the behaviors in such activities. In this paper, we propose a honeypot-enabled optimal defense strategy selection approach for smart grids, based on a novel stochastic game. Specifically, the interactions between the attacker and smart grid defender are captured using our designed stochastic game, a non-cooperative two-player game with incomplete information. We take into account various possible defenses from a smart grid defender and offensive strate-gies from the attacker. Then the Nash equilibrium is calculated by the stochastic game model, which is derived exhibiting an optimal defense strategy for the smart grid defender. Extensive simulation experiments demonstrate the effectiveness of the proposed scheme.","PeriodicalId":200641,"journal":{"name":"2021 IEEE Global Communications Conference (GLOBECOM)","volume":"240 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121538844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-12-01  DOI: 10.1109/GLOBECOM46510.2021.9685192
A. Nepomuceno, P. Antunes, N. Alberto, P. André, H. Chi, A. Radwan, M. F. Domingues
In this paper, we present an optical-fiber-based architecture for non-invasive home monitoring of elderly citizens. The approach is based on a network of optical fiber sensors distributed along the space/room to be monitored. The sensing mechanism is based on optical fiber Bragg grating (FBG) sensors, produced by the phase mask method and integrated within an accelerometer structure. This type of sensing solution offers high sensitivity combined with extra resilience. Here we present the proposed architecture, the evaluation of different parameters that influence the accelerometer feedback, and the theoretical approach for indoor localization using this type of sensing mechanism. One advantage of the proposed solution is that it does not depend on wearables, which are often considered a burden by the elderly.
{"title":"Photonic sensors for non-invasive home monitoring of elders","authors":"A. Nepomuceno, P. Antunes, N. Alberto, P. André, H. Chi, A. Radwan, M. F. Domingues","doi":"10.1109/GLOBECOM46510.2021.9685192","DOIUrl":"https://doi.org/10.1109/GLOBECOM46510.2021.9685192","url":null,"abstract":"In this paper, we present an optical fiber based architecture for non-invasive home monitoring of elder citizens. The approach is based on a network of optical fiber sensors distributed along the space/room to be monitored. The sensing mechanism is based on optical fiber Bragg grating (FBG) sensors, produced by the phase mask method and integrated within an accelerometer structure. This type of sensing solution has high sensitivity, allied with an extra resilience. Here we present the proposed architecture, the evaluation of different parameters that influence the accelerometer feedback, and the theoretical approach for indoor localization using this type of sensing mechanism. One advantage of the proposed solution is that it does not depend on wearables, which are considered burden for elders.","PeriodicalId":200641,"journal":{"name":"2021 IEEE Global Communications Conference (GLOBECOM)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114842184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-12-01  DOI: 10.1109/GLOBECOM46510.2021.9685966
Y. Cui, Heli Zhang, Hong Ji, Xi Li, Xun Shao
As a key technology of the sixth generation (6G), cloud-edge collaboration has attracted attention in the industrial Internet of Things (IIoT). However, the delay-sensitive and resource-intensive intelligent services in IIoT not only require large amounts of computing resources to reduce the delay cost and energy consumption of devices, but also require fast and accurate intelligent decisions to avoid service congestion. In this paper, we design an offloading scheme based on cloud-edge collaboration and edge collaboration, including four computing modes, which jointly considers the delay and energy optimization of devices. We propose a parallel deep learning-driven cooperative offloading (PDCO) algorithm, which balances the timeliness and accuracy of the offloading scheme. To deal with the difficulty of obtaining labels, a low-complexity hybrid label processing method is designed to reduce the cost of labeling data, and multiple parallel deep neural networks (DNNs) are then trained to generate the best offloading decision in a timely manner. Simulation results show that the proposed algorithm can generate offloading decisions with more than 90% accuracy within 0.1 s while considering green scheduling.
{"title":"Cloud-Edge Collaboration with Green Scheduling and Deep Learning for Industrial Internet of Things","authors":"Y. Cui, Heli Zhang, Hong Ji, Xi Li, Xun Shao","doi":"10.1109/GLOBECOM46510.2021.9685966","DOIUrl":"https://doi.org/10.1109/GLOBECOM46510.2021.9685966","url":null,"abstract":"As a key technology of the sixth generation (6G), cloud-edge collaboration has attracted attention in the industrial Internet of Things (IIoT). However, the delay-sensitive and resource-intensive intelligent services in IIoT not only require a large number of computing resources to reduce the delay cost and energy consumption of devices but also require fast and accurate intelligent decisions to avoid service congestion. In this paper, we design an offloading scheme based on cloud-edge collaboration and edge collaboration, including four computing modes, which jointly consider the delay and energy optimization of devices. We propose a parallel deep learning-driven cooperative offloading (PDCO) algorithm, which weighs the real-time and accuracy of offloading scheme. To deal with the difficulty of obtaining labels, a low-complexity hybrid label processing method is designed to reduce the cost of labeling data, and then multiple parallel deep neural networks (DNNs) are trained to generate the best offloading decision timely. Simulation results show that the proposed algorithm can generate offloading decisions with more than 90% accuracy in 0.1s while considering green scheduling.","PeriodicalId":200641,"journal":{"name":"2021 IEEE Global Communications Conference (GLOBECOM)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124534273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-12-01  DOI: 10.1109/GLOBECOM46510.2021.9685972
Tatsuya Otoshi, S. Arakawa, M. Murata, T. Hosomi
In 5G, the network is divided into slices to provide communications with different characteristics, such as ultra-reliable low-latency communications (URLLC), massive connections (mMTC), and high-speed, high-capacity communications (eMBB), for different applications. Although the selection of network slices is often static, in practice, dynamic slice selection is required depending on the application situation. However, there are issues such as the slice change itself altering the application situation and the delay associated with the slice change. In this paper, we realize dynamic slice selection by recognizing the rough application situation and learning the mapping between the recognized situation and the slice. The Bayesian Attractor Model (BAM) is used to achieve consistent recognition and is extended with a Dirichlet Process Mixture Model (DPMM) to achieve automatic attractor construction. The mapping between situations and slices is also learned automatically from feedback. As an application of dynamic slice selection, we also show slice selection driven by the video streaming situation. Through numerical examples, we show that our method can keep video streaming quality high while reducing slice changes.
{"title":"Non-parametric Decision-Making by Bayesian Attractor Model for Dynamic Slice Selection","authors":"Tatsuya Otoshi, S. Arakawa, M. Murata, T. Hosomi","doi":"10.1109/GLOBECOM46510.2021.9685972","DOIUrl":"https://doi.org/10.1109/GLOBECOM46510.2021.9685972","url":null,"abstract":"In 5G, the network is divided into slices to provide communications with different characteristics, such as low latency and reliable communications (URRLC), multiple connections (MTC), and high speed and high capacity communications (eMBB), for different applications. Although the selection of network slices is often static, in practice, dynamic slice selection is required depending on the application situation. However, there are issues such as the slice change itself changing the application situation and the delay associated with the slice change. In this paper, we realize dynamic slice selection by recognizing the rough situation and the mapping between the recognized situation and the slice. The Bayesian Attractor Model (BAM) is used for recognition to achieve consistent recognition and is extended to the Dirichlet Process Mixture Model (DPMM) to achieve automatic attractor construction. The mapping between situations and slices is also automatically learned by using feedback. As an application of dynamic slice selection, we also show slice selection based on the video streaming situation. Through numerical examples, we show that our method can keep the quality of video streaming high while reducing slice changes.","PeriodicalId":200641,"journal":{"name":"2021 IEEE Global Communications Conference (GLOBECOM)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127784624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-12-01  DOI: 10.1109/GLOBECOM46510.2021.9685978
T. Ho, T. Nguyen, K. Nguyen, M. Cheriet
In this paper, we investigate the problem of robot swarm control in 5G mission-critical robotic applications, i.e., in an automated grid-based warehouse scenario. Such an application requires the kinematic energy consumption of the robots and the ultra-reliable and low-latency communication (URLLC) between the central controller and the robot swarm to be jointly optimized in real time. The problem is formulated as a nonconvex optimization problem, since the achievable rate and decoding error probability with short block-length are neither convex nor concave in bandwidth and transmit power. We propose a deep reinforcement learning (DRL) based approach that employs the deep deterministic policy gradient (DDPG) method and a convolutional neural network (CNN) to obtain a stationary optimal control policy consisting of a number of continuous and discrete actions. Numerical results show that our proposed multi-agent DDPG algorithm achieves performance close to the optimal baseline and outperforms the single-agent DDPG in terms of decoding error probability and energy efficiency.
{"title":"Deep Reinforcement Learning for URLLC in 5G Mission-Critical Cloud Robotic Application","authors":"T. Ho, T. Nguyen, K. Nguyen, M. Cheriet","doi":"10.1109/GLOBECOM46510.2021.9685978","DOIUrl":"https://doi.org/10.1109/GLOBECOM46510.2021.9685978","url":null,"abstract":"In this paper, we investigate the problem of robot swarm control in 5G mission-critical robotic applications, i.e., in an automated grid-based warehouse scenario. Such application requires both the kinematic energy consumption of the robots and the ultra-reliable and low latency communication (URLLC) between the central controller and the robot swarm to be jointly optimized in real-time. The problem is formulated as a nonconvex optimization problem since the achievable rate and decoding error probability with short block-length are neither convex nor concave in bandwidth and transmit power. We propose a deep reinforcement learning (DRL) based approach that employs the deep deterministic policy gradient (DDPG) method and convolutional neural network (CNN) to achieve a stationary optimal control policy that consists of a number of continuous and discrete actions. Numerical results show that our proposed multi-agent DDPG algorithm achieves a performance close to the optimal baseline and outperforms the single-agent DDPG in terms of decoding error probability and energy efficiency.","PeriodicalId":200641,"journal":{"name":"2021 IEEE Global Communications Conference (GLOBECOM)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126344800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-12-01  DOI: 10.1109/GLOBECOM46510.2021.9685091
M. Aloqaily, Ouns Bouachir, I. A. Ridhawi
Advanced services leveraged for future smart cities have played a significant role in the advancement of 5G networks towards the 6G vision. Interactive immersive applications are an example of these enabled services. Such applications allow multiple users to interact in a 3D environment created from virtual representations of real objects and participants, using various technologies such as Virtual Reality (VR), Augmented Reality (AR), Extended Reality (XR), Digital Twin (DT), and holography. These applications require advanced computing models that can process the massive amounts of gathered data. Motions, gestures, and object modifications should be captured, added to the virtual environment, and shared with all the participants. Relying only on the cloud to process this data can cause significant delays. Therefore, a hybrid cloud/edge architecture with an intelligent resource orchestration mechanism that can allocate the available capacities efficiently is necessary. In this paper, a blockchain and federated learning-enabled predicted edge-resource allocation (FLP-RA) algorithm is introduced to manage the allocation of computing resources in B5G networks. It allows smart edge nodes to train models on their local data and share them with other nodes to create a global estimation of future network loads. As such, nodes are able to make accurate decisions to distribute the available resources and provide the lowest computing delay.
{"title":"Blockchain and FL-based Network Resource Management for Interactive Immersive Services","authors":"M. Aloqaily, Ouns Bouachir, I. A. Ridhawi","doi":"10.1109/GLOBECOM46510.2021.9685091","DOIUrl":"https://doi.org/10.1109/GLOBECOM46510.2021.9685091","url":null,"abstract":"Advanced services leveraged for future smart cities have played a significant role in the advancement of 5G networks towards the 6G vision. Interactive immersive applications are an example of those enabled services. Such applications allow for the interaction between multiple users in a 3D environment created by virtual presentations of real objects and participants using various technologies such as Virtual Reality (VR), Augmented Reality (AR), Extended Reality (XR), Digital Twin (DT) and holography. These applications require advanced computing models which allow for the processing of massive gathered amounts of data. Motions, gestures and object modification should be captured, added to the virtual environment, and shared with all the participants. Relying only on the cloud to process this data can cause significant delays. Therefore, a hybrid cloud/edge architecturewith an intelligent resource orchestration mechanism, that is able to allocate the available capacities efficiently is necessary. In this paper, a blockchain and federated learning-enabled predicted edge-resource allocation (FLP-RA) algorithm is introduced to manage the allocation of computing resources in B5G networks. It allows for smart edge nodes to train their local data and share it with other nodes to create a global estimation of future network loads. As such, nodes are able to make accurate decisions to distribute the available resources to provide the lowest computing delay.","PeriodicalId":200641,"journal":{"name":"2021 IEEE Global Communications Conference (GLOBECOM)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125795153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}