Title: Enhancing Privacy of Online Chat Apps Utilising Secure Node End-to-End Encryption (SNE2EE)
Authors: Nithish Velagala, Leandros A. Maglaras, N. Ayres, S. Moschoyiannis, L. Tassiulas
Published in: 2022 IEEE Symposium on Computers and Communications (ISCC)
Pub Date: 2022-06-30 | DOI: 10.1109/ISCC55528.2022.9912888
Abstract: SNE2EE is a messaging service that protects individuals at every stage of the data transfer process: creation, transmission, and reception. The aim of SNE2EE is to protect user communications not only while their data are being transported to another user via secure ports/protocols, but also while they are being created.
Title: A First Look at Accurate Network Traffic Generation in Virtual Environments
Authors: Giuseppe Aceto, Ciro Guida, Antonio Montieri, V. Persico, A. Pescapé
Pub Date: 2022-06-30 | DOI: 10.1109/ISCC55528.2022.9913058
Abstract: The generation of synthetic network traffic is necessary for several fundamental networking activities, ranging from device testing to path monitoring, with implications for security and management. While the literature has focused on high-rate traffic generation, for many use cases accurate traffic generation matters instead. These scenarios have expanded with Network Function Virtualization, Software Defined Networking, and Cloud applications, which introduce further causes of alteration in the generated traffic. Such causes are described and experimentally evaluated in this work, where the generation accuracy of D-ITG, an open-source software generator, is investigated in a virtualized environment. To this end, accuracy is defined in terms of the Mean Absolute Percentage Error of the sequences of Payload Lengths (PLs) and Inter-Departure Times (IDTs). The tool is found to be accurate for all PLs and for IDTs greater than one millisecond and, after the correction of a systematic error, also from 100 µs.
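The accuracy metric the abstract names is the standard Mean Absolute Percentage Error over the generated sequences. A minimal sketch of that metric (not the authors' code; the payload-length values below are made up for illustration):

```python
def mape(actual, generated):
    """Mean Absolute Percentage Error (%) between two equal-length sequences."""
    if len(actual) != len(generated) or not actual:
        raise ValueError("sequences must be non-empty and equally long")
    return 100.0 * sum(abs(a - g) / abs(a) for a, g in zip(actual, generated)) / len(actual)

# Hypothetical example: requested vs. observed payload lengths (bytes);
# a low MAPE indicates accurate generation.
requested = [512, 1024, 256, 1500]
observed = [512, 1000, 260, 1500]
error = mape(requested, observed)
```

The same function applies unchanged to inter-departure times, with the caveat that actual values of zero must be excluded.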
Title: A Platform for Federated Learning on the Edge: a Video Analysis Use Case
Authors: Alessio Catalfamo, A. Celesti, M. Fazio, Giovanni Randazzo, M. Villari
Pub Date: 2022-06-30 | DOI: 10.1109/ISCC55528.2022.9912968
Abstract: Recently, both scientific and industrial communities have highlighted the importance of running Machine Learning (ML) applications on Edge computing, closer to the end-user and to where raw data are managed, for many reasons including quality of service (QoS) and security. However, due to the limited computing, storage, and network resources at the Edge, several ML algorithms have been re-designed for deployment on Edge devices. In this paper, we explore in detail Edge Federation for supporting ML-based solutions. In particular, we present a new platform for the deployment and management of complex services at the Edge. It provides an interface that allows applications to be arranged as a collection of interconnected, lightweight, loosely coupled services (i.e., microservices) and enables their management across federated Edge devices through the abstraction of the underlying clusters of physical devices. The proposed solution is validated by a use case related to video analysis in the morphological field.
Title: ActDetector: A Sequence-based Framework for Network Attack Activity Detection
Authors: Jiaqi Kang, Huiran Yang, Y. Zhang, Yueyue Dai, Mengqi Zhan, Weiping Wang
Pub Date: 2022-06-30 | DOI: 10.1109/ISCC55528.2022.9912824
Abstract: The cyber security situation has not been optimistic in recent years due to the rapid growth of security threats. More worryingly, threats are becoming increasingly sophisticated, which poses challenges to attack activity analysis. It is important for analysts to understand attack activities from a holistic perspective rather than merely paying attention to individual alerts. Currently, attack activity analysis generally relies on manual effort, which imposes a heavy workload on analysts. Besides, it is difficult to achieve high detection accuracy due to missing and false-positive alerts. In this paper, we propose a new framework, ActDetector, to detect attack activities automatically from raw Network Intrusion Detection System (NIDS) alerts, greatly reducing the workload of security analysts. We extract attack phase descriptions from alerts and embed attack activity descriptions to obtain their numerical representation. Finally, we use a temporal-sequence-based model to detect potential attack activities. We evaluate ActDetector on three datasets. Experimental results demonstrate that ActDetector detects attack activities from raw NIDS alerts with an average of 94.8% Precision, 95.0% Recall, and 94.6% F1-score.
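The reported Precision, Recall, and F1-score follow the usual definitions over true positives (TP), false positives (FP), and false negatives (FN). A small sketch with hypothetical confusion counts (not taken from the paper):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute Precision, Recall, and F1 from confusion counts."""
    precision = tp / (tp + fp)   # fraction of reported activities that are real
    recall = tp / (tp + fn)      # fraction of real activities that are reported
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Hypothetical counts for one evaluation run
p, r, f = precision_recall_f1(tp=90, fp=10, fn=10)
```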
Title: Deep-Learning for Cooperative Spectrum Sensing Optimization in Cognitive Internet of Things
Authors: Hind Boukhairat, M. Koulali
Pub Date: 2022-06-30 | DOI: 10.1109/ISCC55528.2022.9912823
Abstract: Spectrum sensing is a critical component of the Cognitive Internet of Things. It allows Secondary Users (SUs) to opportunistically access underutilized frequency bands licensed to Primary Users (PUs) without causing harmful interference to them. However, accurate individual spectrum sensing solutions are complex to deploy. Thus, Cooperative Spectrum Sensing (CSS) techniques have flourished. These techniques combine individual sensing results through a weighting mechanism at a fusion center to assess the channel status. The fusion process depends heavily on the individual detection thresholds at each SU and the weights attributed to their sensing results by the fusion center. In this paper, we propose to use a Deep Neural Network (DNN) to compute the optimal energy detection threshold and fusion weights. Our goal is to develop a solution that optimally adapts to time-varying wireless channel conditions. Furthermore, our DNN-based solution eliminates the need to solve hard optimization problems, significantly reducing computational complexity, especially in large networks.
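The weighted fusion step the abstract describes reduces, at decision time, to a weighted sum of per-SU energy statistics compared against a global threshold at the fusion center. A minimal soft-combining sketch (the weights, energies, and threshold below are illustrative, not from the paper; the paper's contribution is learning these quantities with a DNN):

```python
def fused_decision(energies, weights, threshold):
    """Soft-combining CSS: the fusion center declares the channel busy when
    the weighted sum of per-SU energy statistics exceeds a global threshold."""
    statistic = sum(w * e for w, e in zip(weights, energies))
    return statistic >= threshold, statistic

# Illustrative values: two SUs with equal fusion weights
busy, stat = fused_decision(energies=[1.0, 2.0], weights=[0.5, 0.5], threshold=1.4)
```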
Title: Efficient OFDM Channel Estimation with RRDBNet
Authors: Wei Gao, Meihong Yang, Wei Zhang, Libin Liu
Pub Date: 2022-06-30 | DOI: 10.1109/ISCC55528.2022.9912769
Abstract: Channel estimation is important for orthogonal frequency division multiplexing (OFDM) in current wireless communication systems. Prevalent channel estimation algorithms, however, cannot be widely deployed for practical reasons such as poor robustness and high computational complexity. To solve these problems for OFDM systems, we propose a new channel estimation scheme built on a carefully designed deep learning model called RRDBNet. By combining a multi-level residual network with dense links, RRDBNet can be trained easily while retaining the advantages of residual learning and increasing structural capacity. Our simulation results show that RRDBNet outperforms the traditional least-squares algorithm and existing DL-based super-resolution schemes by 0.5 to 1 dB at low SNR and by 2 to 3 dB at high SNR. In terms of the number of pilots, RRDBNet is also superior to existing schemes and approaches the LMMSE estimator.
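The least-squares baseline the abstract compares against estimates the channel at each pilot subcarrier simply as the ratio of the received to the transmitted pilot symbol. A toy sketch of that baseline, not of RRDBNet itself (the pilot values are illustrative):

```python
def ls_channel_estimate(rx_pilots, tx_pilots):
    """Least-squares channel estimate at pilot subcarriers: H_hat[k] = Y[k] / X[k].
    The full channel is then obtained by interpolating between pilots."""
    return [y / x for y, x in zip(rx_pilots, tx_pilots)]

# Illustrative complex pilot symbols
h_hat = ls_channel_estimate(rx_pilots=[2 + 2j, -1 + 1j], tx_pilots=[1 + 1j, 1 - 1j])
```

LS is cheap but noise-sensitive, which is the gap that LMMSE and learned estimators such as RRDBNet aim to close.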
Title: Privacy vs Accuracy Trade-Off in Privacy Aware Face Recognition in Smart Systems
Authors: Wisam Abbasi, Paolo Mori, A. Saracino, V. Frascolla
Pub Date: 2022-06-30 | DOI: 10.1109/ISCC55528.2022.9912465
Abstract: This paper proposes a novel approach to privacy-preserving face recognition that formally defines a trade-off optimization criterion between data privacy and algorithm accuracy. In our methodology, real-world face images are anonymized with Gaussian blurring for privacy preservation. The anonymized images are then processed for face detection, face alignment, face representation, and face verification. The proposed methodology has been validated through a set of experiments on a well-known dataset with three face recognition classifiers. The results demonstrate the effectiveness of our approach in correctly verifying face images at different levels of privacy and accuracy, and in maximizing privacy with the least negative impact on face detection and face verification accuracy.
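Gaussian blurring, the anonymization primitive named in the abstract, is a convolution with a normalized Gaussian kernel whose sigma acts as the privacy knob. A minimal 1-D grayscale sketch under that generic definition (the paper's actual pipeline and parameters are not reproduced here):

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of size 2*radius + 1."""
    k = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_row(pixels, sigma):
    """1-D Gaussian blur with edge clamping; larger sigma means more privacy."""
    radius = max(1, int(3 * sigma))
    kernel = gaussian_kernel(sigma, radius)
    n = len(pixels)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), n - 1)  # clamp at the borders
            acc += w * pixels[idx]
        out.append(acc)
    return out
```

A 2-D image blur applies the same separable kernel along rows and then columns.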
Title: Scalable Digital Pathology Platform Over Standard Cloud Native Technologies
Authors: Tibério Baptista, Rui Jesus, Luís Bastião Silva, C. Costa
Pub Date: 2022-06-30 | DOI: 10.1109/ISCC55528.2022.9912933
Abstract: The use of digital imaging in medicine has become a cornerstone of modern diagnosis and treatment processes. The new technologies available in this ecosystem have allowed healthcare institutions to improve their workflows, data access, sharing, and visualization using standardized formats. The migration of these services to the cloud enables a remote diagnostic environment in which professionals can review studies remotely and engage in collaborative sessions. Despite the advantages of cloud-ready environments, their adoption has been slowed by the demanding scenario that high-resolution medical images pose. Some studies comprise several gigabytes of data that must be managed and delivered over the network. In this context, performance constraints of the software platforms can result in severe denial of clinical service. This work proposes a highly scalable cloud platform for extreme medical imaging scenarios. It provides scalability through auto-scaling mechanisms that dynamically adjust computational resources according to the service load.
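The abstract does not specify the auto-scaling rule; a common baseline in cloud-native stacks is the proportional rule used by Kubernetes' Horizontal Pod Autoscaler, sketched here with hypothetical load numbers as one way such dynamic adjustment can work:

```python
import math

def desired_replicas(current, observed_load, target_load, lo=1, hi=20):
    """HPA-style proportional scaling: scale the replica count by the ratio of
    observed per-replica load to the target, clamped to [lo, hi]."""
    return min(hi, max(lo, math.ceil(current * observed_load / target_load)))

# Hypothetical: 4 replicas running at 90% CPU against a 60% target
replicas = desired_replicas(current=4, observed_load=90, target_load=60)
```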
Title: Real-time Resource Management in Smart Energy-Harvesting Systems
Authors: M. Abdulla, Audrey Queudet, M. Chetto, Lamia Belouaer
Pub Date: 2022-06-30 | DOI: 10.1109/ISCC55528.2022.9912792
Abstract: Energy harvesting is an emerging technology that extends the lifetime of Internet-of-Things (IoT) applications. Satisfying real-time requirements in these systems is challenging: dedicated real-time schedulers integrating both timing and energy constraints are required, such as the ED-H scheduling algorithm [1]. However, this algorithm has been proven optimal only for independent tasks (i.e., without shared resources), preventing its confident deployment in computing infrastructures where tasks are mostly interdependent. In this paper, we first derive worst-case blocking times and worst-case blocking energy for tasks sharing resources managed by the DPCP protocol [2] and scheduled under the ED-H scheme. We then provide a sufficient schedulability test for ED-H-DPCP that guarantees off-line that both timing and energy constraints will be satisfied, even in the presence of shared resources.
Title: Deep Reinforcement Learning-based Radio Resource Allocation and Beam Management under Location Uncertainty in 5G mmWave Networks
Authors: Y. Yao, Hao Zhou, M. Erol-Kantarci
Pub Date: 2022-06-30 | DOI: 10.1109/ISCC55528.2022.9912837
Abstract: Millimeter Wave (mmWave) is an important part of 5G New Radio (NR), in which highly directional beams are adopted to compensate for the substantial propagation loss based on UE locations. However, the location information may contain errors, such as GPS errors; some degree of localization uncertainty is unavoidable in most settings. Using these distorted locations for clustering increases beam management errors. Meanwhile, traffic demand may change dynamically in the wireless environment. Therefore, a scheme is needed that can handle both localization uncertainty and dynamic radio resource allocation. In this paper, we propose a UK-means-based clustering and deep reinforcement learning-based resource allocation algorithm (UK-DRL) for radio resource allocation and beam management in 5G mmWave networks. We first apply UK-means as the clustering algorithm to mitigate localization uncertainty; then deep reinforcement learning (DRL) is adopted to dynamically allocate radio resources. Finally, we compare UK-DRL with a K-means-based clustering and DRL-based resource allocation algorithm (K-DRL); simulations show that our UK-DRL-based method achieves 150% higher throughput and 61.5% lower delay than K-DRL when the traffic load is 4 Mbps.
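UK-means differs from K-means by clustering uncertain points on expected distance rather than point-to-center distance. Under an isotropic Gaussian location model x ~ N(mu, var·I) in 2-D, that expectation has a closed form; a sketch of this one building block (the Gaussian model and the numbers are illustrative assumptions, not taken from the paper):

```python
def expected_sq_distance(mu, var, center):
    """E[||x - c||^2] for x ~ N(mu, var * I) in 2-D:
    ||mu - c||^2 plus the trace of the covariance (2 * var)."""
    d2 = (mu[0] - center[0]) ** 2 + (mu[1] - center[1]) ** 2
    return d2 + 2.0 * var

# A UE with reported position (0, 0) and per-axis variance 1,
# evaluated against a candidate beam-cluster center at (3, 4)
d = expected_sq_distance(mu=(0.0, 0.0), var=1.0, center=(3.0, 4.0))
```

UK-means assigns each uncertain point to the center minimizing this expectation and updates centers as in standard K-means.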