Title: A non-Hidden Markovian Modeling of the Reliability Scheme of the Constrained Application Protocol in Lossy Wireless Networks
Authors: Nabil Makarem, W. B. Diab, Imad Mougharbel, N. Malouch
DOI: https://doi.org/10.1145/3551659.3559065
Venue: Proceedings of the 25th International ACM Conference on Modeling Analysis and Simulation of Wireless and Mobile Systems, 2022-10-24

The Constrained Application Protocol (CoAP) is a lightweight communication protocol designed by the Internet Engineering Task Force (IETF) for wireless sensor networks and Internet-of-Things (IoT) devices. The reliability mechanism in CoAP is based on retransmissions after timeout expiration and on an exponential backoff procedure, designed to be simple and suited to constrained devices. In this work, we propose a new exact analytical model to analyze the performance of CoAP in lossy wireless networks modeled by the well-known Gilbert-Elliott two-state Markov process. We also show how to compute several performance metrics using closed-form expressions, such as the observed loss ratio, goodput, and the delay before success, with a time complexity of no more than O(r), where r is the maximum retransmission limit. This study provides insights into improving the CoAP recovery mechanism and highlights the properties -- including the limitations -- of CoAP. It also presents guidelines for tuning CoAP parameters dynamically in order to adapt to network losses caused by interference and mobility. The model is validated in the realistic Cooja/Contiki OS environment, where theoretical and experimental results match very well.
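The mechanism analyzed above (retransmission on timeout with binary exponential backoff over a Gilbert-Elliott channel) can be sketched numerically. The snippet below is an illustrative approximation, not the paper's exact model: it evolves the channel-state distribution unconditionally between attempts rather than conditioning on past losses, and all parameter names and values (`p_gb`, `p_bg`, `loss_g`, `loss_b`, `ack_timeout`) are assumptions for the sketch. It does show the O(r) loop structure over the r retransmissions.

```python
import numpy as np

def coap_metrics(p_gb, p_bg, loss_g, loss_b, r=4, ack_timeout=2.0):
    """Approximate success probability and mean delay-before-success for a
    CoAP confirmable message over a Gilbert-Elliott channel (sketch only)."""
    pi_g = p_bg / (p_gb + p_bg)                    # stationary P(Good)
    state = np.array([pi_g, 1.0 - pi_g])           # [P(Good), P(Bad)]
    P = np.array([[1 - p_gb, p_gb], [p_bg, 1 - p_bg]])
    loss = np.array([loss_g, loss_b])
    p_reach, p_success, exp_delay = 1.0, 0.0, 0.0
    backoff, elapsed = ack_timeout, 0.0
    for _ in range(r + 1):                         # initial try + r retries: O(r)
        p_fail_now = float(state @ loss)           # loss prob. at this attempt
        p_succ_now = p_reach * (1.0 - p_fail_now)
        p_success += p_succ_now
        exp_delay += p_succ_now * elapsed
        p_reach *= p_fail_now                      # still unsuccessful so far
        elapsed += backoff                         # wait out the timeout
        backoff *= 2.0                             # binary exponential backoff
        state = state @ P                          # channel evolves between tries
    return p_success, (exp_delay / p_success if p_success else float("inf"))
```

A lossless channel yields certain, immediate success, and raising r can only increase the success probability, matching the qualitative behavior the model captures.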
Title: Characterizing Wi-Fi Probing Behavior for Privacy-Preserving Crowdsensing
Authors: Pegah Torkamandi, Ljubica Kärkkäinen, J. Ott
DOI: https://doi.org/10.1145/3551659.3559039

Smartphones and the signaling messages they emit allow third parties to learn about the owners' mobility. While Wi-Fi and Bluetooth signaling messages have been (mis)used for tracking individuals, there are also privacy-respecting uses: crowd sensing for estimating the number of people in an area and their dynamics is one such example. However, the very useful countermeasures against individual tracking, most prominently MAC address randomization, also complicate crowd size estimation. In this paper, we present an online estimation algorithm that operates only on ephemeral MAC addresses and, if desired, signal strength information to distinguish relevant signals from background noise. We use measurements and simulations to calibrate our counting algorithm and collect numerous data sets with which we explore the algorithm's performance in different scenarios.
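The core idea in the abstract above (counting ephemeral MACs per time window, with an optional signal-strength filter against background noise) can be sketched as follows. This is an illustrative simplification, not the paper's algorithm: the window length, RSSI threshold, and the `macs_per_device` calibration factor are all hypothetical parameters (the paper calibrates against measurements).

```python
from collections import defaultdict

def crowd_estimate(probes, window_s=60, rssi_min=-75, macs_per_device=2.0):
    """Sketch: per-window crowd estimate from probe requests.
    `probes` is an iterable of (timestamp_s, mac, rssi) tuples."""
    windows = defaultdict(set)
    for ts, mac, rssi in probes:
        if rssi < rssi_min:
            continue                                # drop weak/background emitters
        windows[int(ts // window_s)].add(mac)       # distinct ephemeral MACs
    # A device emits several randomized MACs per window, hence the scaling.
    return {w: len(macs) / macs_per_device for w, macs in sorted(windows.items())}
```

With randomization, one device contributes multiple short-lived MACs per window, which is why a raw distinct-MAC count overestimates the crowd and needs calibration.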
Title: Multi-step Prediction of Worker Resource Usage at the Extreme Edge
Authors: Ruslan Kain, Sara A. Elsayed, Y. Chen, H. Hassanein
DOI: https://doi.org/10.1145/3551659.3559051

Democratizing the edge by leveraging the prolific yet underutilized computational resources of end devices, referred to as Extreme Edge Devices (EEDs), can open a new edge computing tech market that is people-owned, democratically managed, and accessible/lucrative to all. Parallel computing at EEDs can also move the computing service much closer to end-users, which can help satisfy the stringent Quality-of-Service (QoS) requirements of delay-critical and/or data-intensive IoT applications. However, EEDs are heterogeneous user-owned devices, and are thus subject to highly dynamic user access behavior (i.e., dynamic resource usage). This makes determining the computational capability of EEDs increasingly challenging, yet estimating the dynamic resource usage of EEDs (i.e., workers) has been mostly overlooked. The complexity of Machine Learning (ML)-based models renders them impractical for deployment at the edge for such estimations. In this paper, we propose the Resource Usage Multi-step Prediction (RUMP) scheme to estimate the dynamic resource usage of workers multiple steps ahead in a computationally efficient way, while providing relatively high prediction accuracy. Towards that end, RUMP exploits the Hierarchical Dirichlet Process-Hidden Semi-Markov Model (HDP-HSMM) to estimate the dynamic resource usage of workers in EED-based computing paradigms. Extensive evaluations on a real testbed of heterogeneous workers, across multiple step sizes, show an 87.5% prediction accuracy at the starting point of 2 steps, and as little as a 16% average difference in prediction error compared to a representative state-of-the-art ML-based scheme.
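To make the semi-Markov idea behind RUMP concrete: in a (hidden) semi-Markov model, a state persists for an explicit duration before transitioning, rather than re-sampling every step. The toy sketch below assumes the states, transition matrix, per-state mean usage, and fixed durations have already been inferred (the paper fits these with an HDP-HSMM; everything here is a simplified stand-in), and rolls the model forward to produce a multi-step usage forecast.

```python
def multistep_forecast(trans, state_means, durations, start_state, steps):
    """Toy multi-step resource-usage forecast from a fitted semi-Markov model.
    trans[i][j]: transition probability i->j; state_means[i]: mean usage in
    state i; durations[i]: (fixed) dwell time of state i, in steps."""
    preds, state = [], start_state
    remaining = durations[state]
    for _ in range(steps):
        preds.append(state_means[state])     # predicted usage at this step
        remaining -= 1
        if remaining == 0:                   # dwell time elapsed: jump to the
            state = max(range(len(trans[state])),
                        key=lambda j: trans[state][j])   # most likely successor
            remaining = durations[state]
    return preds
```

The explicit durations are what let a semi-Markov model capture bursty usage patterns (e.g., a long "idle" phase followed by a short "busy" phase) that a plain Markov chain smears out.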
Title: Comparison of User Presence Information from Mobile Phone and Sensor Data
Authors: Solohaja Rabenjamina, Razvan Stanica, Oana-Teodora Iova, H. Rivano
DOI: https://doi.org/10.1145/3551659.3559054

Data collected from mobile phones or from motion detection sensors are regularly used as a proxy for user presence in networking studies. However, little attention has been paid to the actual accuracy of these data sources, which present certain biases, in capturing actual human presence in a given geographical area. In this work, we conduct the first comparison between mobile phone data collected by an operator and human presence data collected by motion detection sensors in the same geographical area. Through a detailed spatio-temporal analysis, we show that a significant correlation exists between the two datasets, which can be seen as a cross-validation of the two data sources. However, we also detect significant differences at certain times and places, raising questions regarding the data used in certain studies in the literature. For example, we notice that the most important daily mobility peaks detected in mobile phone data are not actually detected by on-ground sensors, and that the end of the work-day activities in the considered area is not synchronized between the two data sources. Our results allow us to distinguish the metrics and the scenarios where user presence information is confirmed by both mobile phone and sensor data.
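The cross-validation described above boils down to correlating two presence time series (e.g., hourly operator-side activity vs. hourly motion-sensor counts for the same area). A minimal sketch of that comparison, using the standard Pearson coefficient (the series names and granularity are illustrative, not taken from the paper):

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length presence time series,
    e.g. hourly phone-activity counts vs. hourly sensor detections."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A value near +1 is the "significant correlation" case; the interesting findings in the paper are precisely the hours and places where this agreement breaks down.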
Title: Fair Iterative Water-Filling Game for Multiple Access Channels
Authors: Majed Haddad, P. Więcek, Oussama Habachi, S. Perlaza, S. M. Shah
DOI: https://doi.org/10.1145/3551659.3559038

The water-filling algorithm is well known for providing optimal data rates in time-varying wireless communication networks. In this paper, a perfectly coordinated water-filling game is considered, in which each user transmits only on the assigned carrier. Contrary to conventional algorithms, the main goal of the proposed algorithm (FEAT) is to achieve near-optimal performance while satisfying fairness constraints among different users. The key idea within FEAT is to minimize the ratio between the utilities of the best and the worst users. To achieve this goal, we devise an algorithm such that, at each iteration (channel assignment), a channel is assigned to a user while ensuring that it does not lose much more than the other users in the system. We show that FEAT outperforms most of the existing related algorithms in many aspects, especially in interference-limited systems. Indeed, with FEAT, we can ensure a low-complexity, near-optimal, and fair solution. It is shown that the balance between being nearly globally optimal and good from an individual point of view seems hard to sustain with a significant number of users, which adds robustness to the proposed algorithm.
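The fairness objective described above (keeping the best-to-worst utility ratio small by never letting one user fall far behind during carrier assignment) can be illustrated with a greedy max-min heuristic. This is not FEAT itself, only a sketch of the assignment idea; `rates[u][c]` is an assumed input giving the rate user u would obtain on carrier c.

```python
def fair_assign(rates):
    """Greedy fair carrier assignment sketch: each carrier goes to the
    currently worst-off user, keeping utilities balanced."""
    n_users, n_carriers = len(rates), len(rates[0])
    util = [0.0] * n_users
    assignment = []
    for c in range(n_carriers):
        u = min(range(n_users), key=lambda i: util[i])  # worst-off user first
        util[u] += rates[u][c]
        assignment.append(u)
    return assignment, util
```

After the loop, max(util)/min(util) approximates the best-to-worst ratio that FEAT minimizes; a real iterative water-filling game would additionally recompute rates under the interference induced by each tentative assignment.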
Title: Real-Time-Shift: Pseudo-Real-Time Event Scheduling for the Split-Protocol-Stack Radio-in-the-Loop Emulation
Authors: Sebastian Boehm, H. Koenig
DOI: https://doi.org/10.1145/3551659.3559057

The incorporation of real radio hardware and physically emulated radio links into higher-layer network and protocol simulation studies has so far been a largely untouched area of research. The Split-Protocol-Stack Radio-in-the-Loop emulation combines pure discrete-event protocol simulation with hardware-based radio link emulation. Since the underlying techniques involve contrary time concepts, event communication between the two domains requires a rethink of scheduling and synchronization. With the Real-Time-Shift conservative synchronization and time compensation scheme, the simulator is decoupled from real-time constraints and limitations by introducing predetermined pause times for event execution. In this paper, we present the core synchronization and event scheduling approach, allowing for scalable pseudo-real-time simulations with radio hardware in the loop. This enables discrete-event simulations for wireless host systems and networks with link-level emulation accuracy, accompanied by an overall high modeling flexibility.
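The scheduling problem described above can be sketched with a minimal pseudo-real-time event loop: events carry virtual timestamps, and before firing each one the loop waits until the wall clock (optionally shifted by a predetermined pause, echoing the Real-Time-Shift idea) has caught up. This is a simplified illustration, not the paper's scheme; the `(t, fn)` event representation and `pause_s` parameter are assumptions of the sketch.

```python
import heapq
import time

def run_pseudo_real_time(events, pause_s=0.0):
    """Fire (virtual_time_s, callback) events in timestamp order, sleeping so
    that each fires no earlier than its virtual time (plus a fixed shift)."""
    heap = [(t, i, fn) for i, (t, fn) in enumerate(events)]  # i breaks ties
    heapq.heapify(heap)
    start = time.monotonic()
    fired = []
    while heap:
        t, _, fn = heapq.heappop(heap)
        wait = (t + pause_s) - (time.monotonic() - start)
        if wait > 0:
            time.sleep(wait)        # align virtual time with the wall clock
        fired.append(fn())
    return fired
```

A conservative scheme must never fire an event before the hardware side could have produced its inputs; the predetermined pause creates exactly that safety margin while keeping the simulator itself purely event-driven.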
Title: Universal Beamforming: A Deep RFML Approach
Authors: H. Nguyen, G. Noubir
DOI: https://doi.org/10.1145/3551659.3559041

We introduce, design, and evaluate a set of universal receiver beamforming techniques. Our approach and system, DEFORM, a Deep Learning (DL)-based RX beamforming system, achieves significant gain for multi-antenna RF receivers while being agnostic to the transmitted signal features (e.g., modulation or bandwidth). It is well known that combining coherent RF signals from multiple antennas results in a beamforming gain proportional to the number of receiving elements. In practice, however, this approach relies heavily on explicit channel estimation techniques, which are link-specific and require significant communication overhead to be transmitted to the receiver. DEFORM addresses this challenge by leveraging a Convolutional Neural Network to estimate the channel characteristics, in particular the relative phase across antenna elements. It is specifically designed to address the unique features of wireless signals' complex samples, such as the ambiguous 2π phase discontinuity and the high sensitivity of the link Bit Error Rate. The channel prediction is subsequently used in the Maximum Ratio Combining algorithm to achieve an optimal combination of the received signals. Although trained on a fixed, basic RF setting, DEFORM's DL model is universal, achieving up to 3 dB of SNR gain for a two-antenna receiver in extensive evaluations across various modulations and bandwidths.
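The Maximum Ratio Combining step mentioned above is standard and easy to sketch: each antenna stream is weighted by the conjugate of its channel estimate (here assumed to come from the CNN) and the streams are summed, which aligns the per-antenna phases and yields the beamforming gain. The array shapes below are illustrative.

```python
import numpy as np

def mrc_combine(rx, h_est):
    """Maximum Ratio Combining: rx has shape (n_antennas, n_samples),
    h_est shape (n_antennas,). Returns the combined sample stream."""
    w = np.conj(h_est)                                 # MRC weights
    return (w[:, None] * rx).sum(axis=0) / np.linalg.norm(h_est)
```

With a perfect estimate and noise-free inputs, a two-antenna combiner scales the signal amplitude by sqrt(2), i.e., the 3 dB gain ceiling the abstract cites for two antennas.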
Title: Blender: Toward Practical Simulation Framework for BLE Neighbor Discovery
Authors: Yukuan Ding, Tong Li, Jiaxin Liang, Danfeng Wang
DOI: https://doi.org/10.1145/3551659.3559052

For the widely used Bluetooth Low-Energy (BLE) neighbor discovery, the parameter configuration directly decides the trade-off between discovery latency and power consumption. It is therefore necessary to evaluate whether a given parameter configuration meets the demands. The existing solutions, however, are far from satisfactory due to unsolved issues. In this paper, we propose Blender, a simulation framework that produces a deterministic and complete probabilistic distribution of discovery latency for a given parameter configuration. To capture the key features observed in practice, Blender adapts to stochastic factors such as channel collisions and the random behavior of the advertiser. Evaluation results show that, compared with state-of-the-art simulators, Blender converges closer to traces from realistic Android-based estimations. Blender can be used to guide parameter configuration for BLE neighbor discovery systems where the trade-off between discovery latency and power consumption is of critical importance.
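The quantity Blender computes, a distribution of discovery latency for given advertising and scanning parameters, can be approximated with a crude Monte Carlo sketch: an advertiser beacons every advertising interval plus the spec's 0-10 ms random delay, a scanner listens for a scan window out of every scan interval, and discovery occurs when a beacon lands inside a listening window. This ignores the three advertising channels and collisions that Blender models; all parameter values are illustrative.

```python
import random

def discovery_latency_samples(adv_interval_ms=100.0, scan_interval_ms=1000.0,
                              scan_window_ms=300.0, n=300, seed=0):
    """Monte Carlo sketch of BLE neighbor discovery latency (single channel,
    no collisions). Returns n sampled latencies in milliseconds."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        offset = rng.uniform(0, scan_interval_ms)      # random phase offset
        t = 0.0
        while True:
            t += adv_interval_ms + rng.uniform(0, 10)  # advDelay per BLE spec
            if (t + offset) % scan_interval_ms < scan_window_ms:
                samples.append(t)                      # beacon heard: discovered
                break
    return samples
```

A histogram of the returned samples is the (empirical) latency distribution; Blender's contribution is producing this distribution deterministically and with the practical stochastic factors included.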
Title: 5G New Radio Sidelink Link-Level Simulator and Performance Analysis
Authors: Peng Liu, Chen Shen, Chunmei Liu, Fernando J. Cintrón, Lyutianyang Zhang, Liu Cao, R. Rouil, Sumit Roy
DOI: https://doi.org/10.1145/3551659.3559049

Since the Third Generation Partnership Project (3GPP) specified 5G New Radio (NR) sidelink in Release 16, researchers have been expressing increasing interest in sidelink in various research areas, such as Proximity Services (ProSe) and Vehicle-to-Everything (V2X). It is essential to provide researchers with a comprehensive simulation platform that allows for extensive NR sidelink link-level evaluations. In this paper, we introduce the first publicly accessible 5G NR link-level simulator that supports sidelink. Our MATLAB-based simulator complies with the 3GPP 5G NR sidelink standards and offers flexible control over various Physical Layer (PHY) configurations. It will facilitate researchers' exploration of NR sidelink, with friendly access to the key network parameters and great potential for customized simulations for algorithm development and performance evaluation. This paper also provides several initial link-level simulation results on sidelink obtained with the developed simulator.
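To illustrate what a link-level simulation loop of the kind described above does at its core, here is a stripped-down example: modulate random bits, pass them through a channel, demodulate, and count bit errors. This is only a generic QPSK-over-AWGN loop in Python, not the paper's MATLAB simulator, and it omits everything sidelink-specific (channel coding, resource pools, PHY channel structure).

```python
import numpy as np

def qpsk_awgn_ber(ebn0_db, n_bits=200_000, seed=1):
    """Measure QPSK bit error rate over AWGN at a given Eb/N0 (dB)."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    # Gray-mapped QPSK, unit symbol energy (2 bits/symbol)
    sym = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = np.sqrt(1 / (4 * ebn0))      # noise std per dimension (Eb = 1/2)
    noise = sigma * (rng.standard_normal(sym.size)
                     + 1j * rng.standard_normal(sym.size))
    y = sym + noise
    bits_hat = np.empty_like(bits)       # hard-decision demapping
    bits_hat[0::2] = (y.real > 0).astype(int)
    bits_hat[1::2] = (y.imag > 0).astype(int)
    return float(np.mean(bits_hat != bits))
```

Sweeping `ebn0_db` and plotting the returned BER gives the familiar waterfall curve; a full link-level simulator wraps this loop with standard-compliant coding, waveforms, and channel models.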
Title: Optimizing Cell Sizes for Ultra-Reliable Low-Latency Communications in 5G Wireless Networks
Authors: Changcheng Huang, Nhat Hieu Le
DOI: https://doi.org/10.1145/3551659.3559056

The millimeter-wave (mmWave) band with large antenna arrays and dense base station deployments has become the prime candidate for 5G mobile systems and a key enabler of ultra-reliable low-latency communications (URLLC). In this paper, we propose an approach to estimating the optimal cell sizes of 5G networks that support URLLC services by combining the physical and data link layers, leveraging concepts from stochastic geometry and queuing theory. Furthermore, the impact of base station densification on the average blocking probability, which is of practical interest, is investigated with numerical results. The results show that the signal-to-interference-plus-noise ratio (SINR) coverage probability and the average blocking probability achieve their optimal values at different cell sizes. Moreover, the differences between the two types of optimal values become more significant at higher SINR thresholds. Our results suggest that the traditional SINR-based approach to cell sizing will cause over-provisioning of base stations and significantly higher costs. Specifically, we share the insight that the interactions between SINR at the physical layer and retransmission at the link layer contribute to varying cost savings.
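On the queuing-theory side of the analysis above, the classic building block for an average blocking probability is the Erlang-B formula: the probability that an arriving request finds all c channels of a cell busy under offered traffic A. Whether the paper uses Erlang-B specifically is an assumption of this sketch; the standard stable recursion is shown below.

```python
def erlang_b(traffic_erlangs, n_channels):
    """Erlang-B blocking probability for offered load A (Erlangs) on c
    channels, via the numerically stable recursion
    B(A, c) = A*B(A, c-1) / (c + A*B(A, c-1)), with B(A, 0) = 1."""
    b = 1.0
    for c in range(1, n_channels + 1):
        b = traffic_erlangs * b / (c + traffic_erlangs * b)
    return b
```

Shrinking cells lowers the offered load per cell (fewer users each) but multiplies the number of base stations, which is exactly the cost trade-off the cell-sizing analysis optimizes jointly with SINR coverage.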