Title: A Fronthaul Signal Compression Method Based on Trellis Coded Quantization
Pub Date: 2019-11-01 | DOI: 10.1109/LATINCOM48065.2019.8937963 | 2019 IEEE Latin-American Conference on Communications (LATINCOM)
Flávio Brito, M. Berg, Chenguang Lu, Leonardo Ramalho, Ilan Sousa, A. Klautau
In the C-RAN architecture, the fronthaul faces very high data-rate requirements due to the characteristics and the large number of transported signals. One of the solutions relies on compression techniques to alleviate this requirement. Therefore, in this work, we propose a compression technique based on Trellis Coded Quantization (TCQ). The method combines 2/3 resampling, block scaling, TCQ, and entropy coding. The results show that the proposed technique improves EVM performance compared with scalar quantization and has a lower computational cost than vector quantization.
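As a rough illustration of the compression chain described above (block scaling followed by trellis-coded quantization), the Python sketch below quantizes a scaled signal with a toy 4-state trellis searched by the Viterbi algorithm. The trellis structure, rate, and codebook are illustrative assumptions, and the 2/3 resampling and entropy-coding stages are omitted; this is not the authors' implementation.

```python
import numpy as np

# Toy 4-state trellis-coded quantizer (TCQ) sketch. Hypothetical trellis and
# codebook; the paper's actual TCQ, resampling and entropy-coding stages are
# not reproduced here.
RATE = 2                                            # bits per sample
LEVELS = np.linspace(-1.0, 1.0, 2 ** (RATE + 1))    # 2^(R+1) reconstruction levels
SUBSETS = [LEVELS[i::4] for i in range(4)]          # subsets D0..D3 (interleaved partition)

# From state s, branch b (0/1) selects a subset and moves to the next state.
NEXT_STATE = [[0, 1], [2, 3], [0, 1], [2, 3]]
BRANCH_SUBSET = [[0, 2], [1, 3], [2, 0], [3, 1]]

def block_scale(x, block=64):
    """Scale each block to unit peak amplitude (block scaling stage)."""
    out = np.empty_like(x, dtype=float)
    scales = []
    for i in range(0, len(x), block):
        blk = x[i:i + block]
        s = np.max(np.abs(blk)) or 1.0
        out[i:i + block] = blk / s
        scales.append(s)
    return out, np.array(scales)

def tcq_encode(x):
    """Viterbi search over the trellis for the minimum-distortion path."""
    n_states = 4
    cost = np.full(n_states, np.inf); cost[0] = 0.0
    paths = [[] for _ in range(n_states)]
    for sample in x:
        new_cost = np.full(n_states, np.inf)
        new_paths = [None] * n_states
        for s in range(n_states):
            if not np.isfinite(cost[s]):
                continue
            for b in (0, 1):
                subset = SUBSETS[BRANCH_SUBSET[s][b]]
                idx = int(np.argmin((subset - sample) ** 2))
                d = (subset[idx] - sample) ** 2
                ns = NEXT_STATE[s][b]
                if cost[s] + d < new_cost[ns]:
                    new_cost[ns] = cost[s] + d
                    new_paths[ns] = paths[s] + [subset[idx]]
        cost, paths = new_cost, new_paths
    best = int(np.argmin(cost))
    return np.array(paths[best])

x = np.sin(2 * np.pi * 0.01 * np.arange(256)) + 0.05 * np.random.randn(256)
scaled, scales = block_scale(x)
xq = tcq_encode(scaled)
print("quantization MSE:", np.mean((scaled - xq) ** 2))
```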
{"title":"A Fronthaul Signal Compression Method Based on Trellis Coded Quantization","authors":"Flávio Brito, M. Berg, Chenguang Lu, Leonardo Ramalho, Ilan Sousa, A. Klautau","doi":"10.1109/LATINCOM48065.2019.8937963","DOIUrl":"https://doi.org/10.1109/LATINCOM48065.2019.8937963","url":null,"abstract":"In the C-RAN architecture, there is a very high requirement of data rate for the fronthaul due to the characteristics and the high number of signals. One of the solutions relies on compression techniques to alleviate this requirement. Therefore, in this work, we propose a compression technique based on Trellis Coded Quantization. We use a resampling of 2/3, block scaling, TCQ quantization, and entropy coding. The results show that improves EVM performance in comparison with the scalar quantization and presents a lower computational cost than vector quantization.","PeriodicalId":120312,"journal":{"name":"2019 IEEE Latin-American Conference on Communications (LATINCOM)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114674286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: An Extensible Access Control Architecture for Software Defined Networks based on X.812
Pub Date: 2019-11-01 | DOI: 10.1109/LATINCOM48065.2019.8937972 | 2019 IEEE Latin-American Conference on Communications (LATINCOM)
B. Martins, D. M. F. Mattos, N. Fernandes, D. Muchaluat-Saade, A. Vieira, E. F. Silva
The software-defined networking paradigm adds flexibility to network management, as it allows policies to be applied at a fine-grained flow level. However, the traditional definition of a flow disregards user identification credentials, so identity management in software-defined networking remains a challenge. In this paper, we propose an access control architecture for software-defined networking based on the ITU X.812 standard and implemented on the AuthFlow authentication framework. The proposed architecture integrates AuthFlow with an attribute repository that maps network policies to user attributes. The proposal supports integration with identity federation, and we evaluate it under a role-based access control model. The evaluated use case is a service differentiation policy according to the role of each user. The evaluation results demonstrate that quality of service is correctly applied according to the role of the user associated with each flow.
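To make the attribute-to-policy mapping concrete, the sketch below resolves a flow's user against a local attribute repository and returns a role-based decision. The repository contents, roles, and QoS classes are hypothetical and do not reflect AuthFlow's actual interfaces or the ITU X.812 components.

```python
# Minimal sketch of role-based flow policy lookup against a local attribute
# repository; names and policies are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Flow:
    user: str
    dst_port: int

ATTRIBUTE_REPOSITORY = {            # user -> attributes (here, only a role)
    "alice": {"role": "professor"},
    "bob": {"role": "student"},
}

ROLE_POLICIES = {                   # role -> access decision and QoS class
    "professor": {"allow": True, "qos_class": "EF"},
    "student": {"allow": True, "qos_class": "BE"},
}

def decide(flow: Flow):
    attrs = ATTRIBUTE_REPOSITORY.get(flow.user)
    if attrs is None:                                   # unauthenticated flow
        return {"allow": False, "qos_class": None}
    return ROLE_POLICIES.get(attrs["role"], {"allow": False, "qos_class": None})

print(decide(Flow("alice", 443)))   # {'allow': True, 'qos_class': 'EF'}
print(decide(Flow("eve", 22)))      # {'allow': False, 'qos_class': None}
```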
{"title":"An Extensible Access Control Architecture for Software Defined Networks based on X.812","authors":"B. Martins, D. M. F. Mattos, N. Fernandes, D. Muchaluat-Saade, A. Vieira, E. F. Silva","doi":"10.1109/LATINCOM48065.2019.8937972","DOIUrl":"https://doi.org/10.1109/LATINCOM48065.2019.8937972","url":null,"abstract":"The software-defined networking paradigm adds flexibility to network management as it allows the policy application in fined-grained flow level. However, the traditional definition of flow disregards user identification credentials. Thus, Identity Management in software-defined networking is a current challenge. In this paper, we propose an access control architecture for software-defined networking, based on ITU X.812 standard and implemented on AuthFlow authentication framework. The proposed architecture integrates AuthFlow with an attribute repository that maps network policies to user attributes. The proposal supports its integration with identity federation, and we evaluate it under a role-based access control model. The evaluated use case is a service differentiation policy according to the role of each user. The evaluation results demonstrate the correct application of the quality of service according to the role of the flow target user.","PeriodicalId":120312,"journal":{"name":"2019 IEEE Latin-American Conference on Communications (LATINCOM)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114260343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Particle Swarm Optimization Applied to Control of Mutual Coupling in MIMO Systems
Pub Date: 2019-11-01 | DOI: 10.1109/LATINCOM48065.2019.8937863 | 2019 IEEE Latin-American Conference on Communications (LATINCOM)
I. Leal, M. Alencar, W. Lopes
This paper presents a model for Multiple Input Multiple Output (MIMO) systems that considers the mutual coupling between antenna array elements. The conventional-mutual-impedance method (CMIM) and the receiving-mutual-impedance method (RMIM) are considered due to their good approximation of real models. The paper presents a lower limit for the distance between antenna elements at which the mutual coupling effect is minimized, and highlights the importance of the CMIM and RMIM methods in MIMO system design. Simulation results show a 9% improvement in channel capacity for the 3×3 MIMO system.
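The sketch below illustrates, under simplifying assumptions, how a receive-side coupling matrix enters the standard MIMO log-det capacity formula for a 3×3 Rayleigh channel. The coupling matrix values are placeholders; computing them with CMIM or RMIM from the mutual impedances, and the PSO-based control referenced in the title, are not reproduced here.

```python
import numpy as np

# Ergodic capacity of a 3x3 MIMO link with and without a mutual-coupling
# matrix applied at the receiver. The coupling matrix below is a toy
# placeholder; CMIM/RMIM would derive it from the antennas' mutual impedances.
rng = np.random.default_rng(0)
Nt = Nr = 3
snr = 10 ** (10 / 10)                                  # 10 dB

C_rx = np.eye(Nr) + 0.2 * (np.eye(Nr, k=1) + np.eye(Nr, k=-1))   # toy coupling

def capacity(H):
    return np.log2(np.linalg.det(np.eye(Nr) + (snr / Nt) * H @ H.conj().T).real)

caps_ideal, caps_coupled = [], []
for _ in range(2000):
    H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
    caps_ideal.append(capacity(H))
    caps_coupled.append(capacity(C_rx @ H))

print(f"mean capacity, no coupling:   {np.mean(caps_ideal):.2f} bit/s/Hz")
print(f"mean capacity, with coupling: {np.mean(caps_coupled):.2f} bit/s/Hz")
```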
{"title":"Particle Swarm Optimization Applied to Control of Mutual Coupling in MIMO Systems","authors":"I. Leal, M. Alencar, W. Lopes","doi":"10.1109/LATINCOM48065.2019.8937863","DOIUrl":"https://doi.org/10.1109/LATINCOM48065.2019.8937863","url":null,"abstract":"This paper presents a modeling for Multiple Input Multiple Output (MIMO) system considering the mutual coupling between antennas array elements. The conventional-mutual-impedance method (CMIM) and the receiving-mutual-impedance method (RMIM) are considered due to their and good approximation of real models. It presents a lower limit for the distance among the antenna elements in which the mutual coupling effect is minimized and also the importance of the CMIM and RMIM methods in MIMO systems projects. Simulation results show an improvement in the channel capacity performance of 9% for the MIMO 3×3 system.","PeriodicalId":120312,"journal":{"name":"2019 IEEE Latin-American Conference on Communications (LATINCOM)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124674349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: 10GbE Network Card Performance Evaluation: A Strategy Based on Sensitivity Analysis
Pub Date: 2019-11-01 | DOI: 10.1109/LATINCOM48065.2019.8937974 | 2019 IEEE Latin-American Conference on Communications (LATINCOM)
P. Rocha, T. Pinheiro, R. Macedo, Francisco Airton Silva
Nowadays, Big Data is emerging as a crucial approach to manage the huge amount of information produced by the growing number of devices connected to the Internet of Things (IoT). The 10GbE Network Interface Card (NIC) is among the most common hardware used to transmit and receive packets in data centers around the globe. High-speed packet capture and processing frameworks are used in conjunction with network adapters to process data, and they try to handle large quantities of packets without discarding any. However, it is difficult to determine which NIC and framework combination is more efficient when multiple parameters are considered. This paper presents a sensitivity analysis to evaluate the impact of two NIC brands (Intel and Chelsio) and two packet capture frameworks (Netmap and PF_Ring). The goal is to indicate on which metric each combination has the most significant impact. Different combinations stood out in specific scenarios, for example: (i) the Chelsio board in conjunction with the Netmap framework offered a higher packet flow rate; (ii) the Chelsio board and the PF_Ring framework require fewer computational resources to process smaller packets; (iii) to process larger packets, the Intel board and Netmap framework are less demanding. Therefore, this work aims at assisting infrastructure managers in choosing NICs and packet capture frameworks more efficiently.
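As a minimal illustration of the sensitivity-analysis idea, the sketch below computes main effects and the interaction for a 2×2 factorial design (NIC brand × capture framework) on a single metric. The throughput numbers are made-up placeholders, not measurements from the paper.

```python
import numpy as np

# Main-effect computation for a 2x2 factorial design (NIC brand x capture
# framework). Throughput values are placeholders, not the paper's results.
# Factors coded -1/+1: NIC (Intel=-1, Chelsio=+1), framework (PF_Ring=-1, Netmap=+1)
runs = np.array([
    # NIC, framework, throughput (Mpps)
    [-1, -1, 7.1],
    [-1, +1, 8.4],
    [+1, -1, 7.9],
    [+1, +1, 9.6],
])

y = runs[:, 2]
effect_nic = y[runs[:, 0] == +1].mean() - y[runs[:, 0] == -1].mean()
effect_fw  = y[runs[:, 1] == +1].mean() - y[runs[:, 1] == -1].mean()
inter = (y[3] + y[0]) / 2 - (y[2] + y[1]) / 2     # NIC x framework interaction

print(f"NIC main effect:          {effect_nic:+.2f} Mpps")
print(f"Framework main effect:    {effect_fw:+.2f} Mpps")
print(f"NIC x framework interact: {inter:+.2f} Mpps")
```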
{"title":"10GbE Network Card Performance Evaluation: A Strategy Based on Sensitivity Analysis","authors":"P. Rocha, T. Pinheiro, R. Macedo, Francisco Airton Silva","doi":"10.1109/LATINCOM48065.2019.8937974","DOIUrl":"https://doi.org/10.1109/LATINCOM48065.2019.8937974","url":null,"abstract":"Nowadays, Big Data is emerging as a crucial approach to manage the huge amount of information produced by the growing number of devices connected to the Internet of Things (IoT). The 10GbE Network Interface Card (NIC) is one of the most common hardware used to transmit and receive packets in data centers around the globe. High-speed packet capture and processing frameworks are used in conjunction with network adapters to process data. These frameworks try to process large quantities of packets without discard. However, it is difficult to realize which NIC combined with framework is more efficient considering multiple parameters. This paper presents a sensitivity analysis to evaluate the impact of two NIC brands (Intel and Chelsio) and two packet capture frameworks (Netmap and PF_Ring). The goal is to indicate which metric each combination has the most significant impact. Different combinations have stood out in specific scenarios, for example: (i) the Chelsio board in conjunction with the Netmap framework were able to offer a higher packages flow rate; (ii) the Chelsio board and the PF_Ring framework require fewer computational resources to process smaller packets; (iii) to process larger packages, the Intel board and Netmap framework are less demanding. Therefore, this work aims at assisting infrastructure managers to choose NICs and packet capture frameworks in a more efficient way.","PeriodicalId":120312,"journal":{"name":"2019 IEEE Latin-American Conference on Communications (LATINCOM)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125623193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Comparison of Spatial Clustering Techniques for Location Privacy
Pub Date: 2019-11-01 | DOI: 10.1109/LATINCOM48065.2019.8938006 | 2019 IEEE Latin-American Conference on Communications (LATINCOM)
Juan Pablo Duque Ordóñez, Angelly de Jesús Pugliese Viloria, Pedro Wightman Rojas
Location privacy emerged to deal with privacy protection issues that came with the massification of georeferenced data due to the frequent use of phones, social media, GPS services and other applications. This georeferenced data can be directly connected to users' personal information, such as religion, health and tracking, and can be used for different purposes, such as local analysis or sale to third-party companies, which represents a risk for individuals when the information is published or stolen without the protection of a location privacy protection mechanism (LPPM). Many LPPMs have been proposed; one of them, VoKA, is an offline K-Aggregation technique. The methodology explained in this paper takes the first part of VoKA, a gridification process, and then applies two different spatial clustering algorithms, K-Means and DBSCAN, in order to protect each point of a dataset. To show how this mechanism works, a dataset of dengue case records in Barranquilla, Colombia, and its outskirts was used, since this kind of data is considered sensitive. The results show which algorithm fits this dataset better, using squared error, point loss and heatmap comparisons.
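A minimal sketch of the comparison, assuming synthetic coordinates and illustrative parameters (k, eps) rather than the values tuned in the paper: both algorithms cluster the points, and each point is replaced by its cluster centre to compute squared error and point loss.

```python
import numpy as np
from sklearn.cluster import KMeans, DBSCAN

# Aggregate synthetic geo-points with K-Means and DBSCAN and report the
# squared error of replacing each point by its cluster centre, plus the
# points lost as DBSCAN noise. Coordinates and parameters are illustrative.
rng = np.random.default_rng(1)
points = rng.normal(loc=[[10.98, -74.80]], scale=0.02, size=(500, 2))

def obfuscation_metrics(points, labels):
    err, lost = 0.0, 0
    for lab in np.unique(labels):
        if lab == -1:                        # DBSCAN noise points are suppressed
            lost = int(np.sum(labels == -1))
            continue
        cluster = points[labels == lab]
        centre = cluster.mean(axis=0)
        err += np.sum((cluster - centre) ** 2)
    return err, lost

km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(points)
db = DBSCAN(eps=0.01, min_samples=5).fit(points)

for name, labels in [("K-Means", km.labels_), ("DBSCAN", db.labels_)]:
    err, lost = obfuscation_metrics(points, labels)
    print(f"{name:8s} squared error = {err:.4f}, points lost = {lost}")
```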
{"title":"Comparison of Spatial Clustering Techniques for Location Privacy","authors":"Juan Pablo Duque Ordóñez, Angelly de Jesús Pugliese Viloria, Pedro Wightman Rojas","doi":"10.1109/LATINCOM48065.2019.8938006","DOIUrl":"https://doi.org/10.1109/LATINCOM48065.2019.8938006","url":null,"abstract":"Location privacy was born to deal with protection privacy issues which came with the massification of georeferenced data due to the frequent use of phones, social media, GPS services and other applications. This georeferenced data can be directly connected to users' personal information like religion, health and tracking, and can be used for different purposes, such as local analysis or selling it to third party companies, which represents a risk for individuals when the information is published or robbed without any protection through a location privacy protection mechanism - LPPMs. Many LPPMs have been proposed in different papers, one of them is called VoKA, a K-Aggregation offline technique. The methodology explained in this paper takes the first part of VoKA, a gridification process, and then applies two different spatial clustering algorithms, K-Means and DBSCAN, in order to protect each point of a dataset. To explain how this mechanism works, a dataset of Dengue registers in Barranquilla-Colombia and its outskirts was used, taking into account that this kind of data is considered sensitive. The results explain how this dataset can fit better with one of the algorithms and its respective metrics using squared error, point loss and heatmap comparisons.","PeriodicalId":120312,"journal":{"name":"2019 IEEE Latin-American Conference on Communications (LATINCOM)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115813596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Filtering Parameters Selection Method and Peaks Extraction for ECG and PPG Signals
Pub Date: 2019-11-01 | DOI: 10.1109/LATINCOM48065.2019.8937861 | 2019 IEEE Latin-American Conference on Communications (LATINCOM)
L. Bastos, D. Rosário, E. Cerqueira, A. Santos, M. N. Lima
With the growth of e-health, wearable devices have stood out due to their practicality and comfort in sensing personal data. These devices gather biosignals for heart rate measurement, blood oxygenation checks, and identification. Electrocardiogram (ECG) and photoplethysmogram (PPG) signals from the devices' sensors generate unique identifiers for identifying people, much like fingerprint, face ID, or iris recognition. Every sensor-based process is subject to noise, electromagnetic interference, and movement, which can disturb the system when identifying, analyzing, and verifying the individual; filtering is therefore an indispensable step. Identification is divided into stages, and filtering is the main and essential one, since it is where preprocessing starts. Thus, this work proposes a method for selecting filtering parameters and extracting peaks from ECG and PPG signals that can be applied to any dataset, following a sequence of steps: filtering, feature (peak) extraction, and, to validate the model, computation of the correlation between filtered and raw waves through wave overlap. The method achieves an 80% correlation between raw and filtered waves.
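The sketch below walks through the three steps on a synthetic PPG-like signal: band-pass filtering, peak extraction with a refractory distance, and the raw/filtered correlation check. The band edges, filter order, and peak parameters are illustrative assumptions, not the values selected by the proposed method.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

# Filtering + peak extraction + correlation check on a synthetic PPG-like
# signal. Parameter values are illustrative, not the paper's selections.
fs = 125.0                                   # sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
raw = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)   # ~72 bpm + noise

def bandpass(x, low, high, fs, order=3):
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

filtered = bandpass(raw, 0.5, 8.0, fs)

# Peak extraction: enforce a refractory distance of ~0.4 s between beats.
peaks, _ = find_peaks(filtered, distance=int(0.4 * fs), prominence=0.5)

# Correlation between the raw and filtered waves (wave-overlap check).
corr = np.corrcoef(raw, filtered)[0, 1]
print(f"detected peaks: {peaks.size}, raw/filtered correlation: {corr:.2f}")
```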
{"title":"Filtering Parameters Selection Method and Peaks Extraction for ECG and PPG Signals","authors":"L. Bastos, D. Rosário, E. Cerqueira, A. Santos, M. N. Lima","doi":"10.1109/LATINCOM48065.2019.8937861","DOIUrl":"https://doi.org/10.1109/LATINCOM48065.2019.8937861","url":null,"abstract":"With the growth of electronic E-health, wearable devices have highlighted due to its practicality and comfort in sensing of the personal data. Those devices gather biosignals for heart rate measurements, blood oxygenation checks, and identification. From sensors in the devices, electrocardiogram (ECG), photoplethysmogram (PPG) generate unique identifiers for use in identifying people, just like fingerprint, faceid, iris, among others. Every process with sensors has noises, electromagnetic waves, and movement that can interfere with the system when identifying, analyzing, and checking the individual. From this, filtering is an indispensable step in any process. The identification is divided into stages, and the main and essential is the filtering because where preprocessing starts. Thus, this work proposes a method of select filtering parameters for ECG and PPG signals and peaks extraction to the purpose the apply them for any dataset, following a sequence of steps, filtering, characteristics extraction (Peaks) and to affirm our model the correlation between filtered and raw waves performing a wave overlap. It is achieving an 80% correlation between raw waves and filtered waves.","PeriodicalId":120312,"journal":{"name":"2019 IEEE Latin-American Conference on Communications (LATINCOM)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131944585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Analysis of Contention-Based SCMA in mMTC Networks
Pub Date: 2019-11-01 | DOI: 10.1109/LATINCOM48065.2019.8937851 | 2019 IEEE Latin-American Conference on Communications (LATINCOM)
Joao V. C. Evangelista, Zeeshan Sattar, Georges Kaddoum
Massive Machine Type Communication (mMTC) is one of the three new applications of fifth-generation (5G) networks. Users in mMTC applications have different transmission patterns and requirements than traditional LTE applications; they are massively deployed and transmit small packets of data sporadically. Grant-based access schemes are inefficient at satisfying the mMTC requirements; therefore, grant-free contention-based access (CBA) is regarded as a promising solution to this problem. In this paper, we analyze the performance of contention-based sparse code multiple access (SCMA) in terms of the probability of successful transmission and the area spectral efficiency. We derive closed-form expressions for both performance metrics and validate them with numerical simulations. Furthermore, we compare the results with an OFDMA contention-based approach.
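As a simplified illustration of contention-based access (not the closed-form expressions derived in the paper), the Monte-Carlo sketch below models N active users each picking one of C SCMA codebooks at random and estimates the probability that a given user's transmission is collision-free.

```python
import numpy as np

# Monte-Carlo sketch of grant-free contention: N active users each pick one
# of C codebooks at random; user 0 succeeds if no other user picked the same
# codebook. A toy collision model, not the paper's analysis.
rng = np.random.default_rng(0)

def success_probability(n_users, n_codebooks, trials=100_000):
    picks = rng.integers(0, n_codebooks, size=(trials, n_users))
    collisions = (picks[:, 1:] == picks[:, [0]]).any(axis=1)
    return 1.0 - collisions.mean()

C = 6                                     # illustrative SCMA codebook count
for n in (2, 4, 6, 8):
    sim = success_probability(n, C)
    analytic = (1 - 1 / C) ** (n - 1)     # matching value for this toy model
    print(f"N={n}: simulated {sim:.3f}, (1-1/C)^(N-1) = {analytic:.3f}")
```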
{"title":"Analysis of Contention-Based SCMA in mMTC Networks","authors":"Joao V. C. Evangelista, Zeeshan Sattar, Georges Kaddoum","doi":"10.1109/LATINCOM48065.2019.8937851","DOIUrl":"https://doi.org/10.1109/LATINCOM48065.2019.8937851","url":null,"abstract":"Massive Machine Type Communication (mMTC) is one of the three new applications of fifth-generation (5G) networks. Users in mMTC applications have different patterns of transmission and requirements than traditional LTE applications; they are massively deployed and transmit small packets of data sporadically. Grant-based access scheme is inefficient to satisfy the mMTC requirements; therefore, grant-free contention-based access (CBA) is appointed as a promising solution to this problem. In this paper, we analyze the performance of contention-based sparse code multiple access (SCMA) concerning the probability of success of transmission and the area spectral efficiency. We derive closed-form expressions for both performance metrics and validate them with numerical simulations. Furthermore, we compare the results with an OFDMA contention-based approach.","PeriodicalId":120312,"journal":{"name":"2019 IEEE Latin-American Conference on Communications (LATINCOM)","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127174676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Stochastic Model for Evaluating Smart Hospitals Performance
Pub Date: 2019-11-01 | DOI: 10.1109/LATINCOM48065.2019.8937944 | 2019 IEEE Latin-American Conference on Communications (LATINCOM)
Laécio Rodrigues, P. Endo, Francisco Airton Silva
Hospital systems must be efficient to prevent loss of human lives. Low latency and high availability of resources are essential features to guarantee quality of service (QoS) in such environments. Taking advantage of the emergence of the Internet of Things (IoT), smart hospitals appear as a health revolution by capturing and transmitting patient data to physicians in real time through a wireless sensor network. For that, smart hospitals need local and remote servers to process and store data efficiently. Commonly, patient information is shared among different devices, ensuring continuous operation and high availability. However, there is significant difficulty in evaluating the performance of such systems in real contexts, because failures are not tolerated (one cannot unplug the system to perform experiments) and the cost of a prototype implementation is high. To cover this issue, this paper adopts an analytical modeling approach to evaluate the performance of a smart hospital system, avoiding the investment in real equipment. Using Stochastic Petri Nets (SPNs), we propose a model to represent the architecture of a smart hospital and estimate metrics related to mean response time and resource utilization probability. The model is highly parametric, making it possible to calibrate server resource capacity and service time; 13 parameters can be defined, allowing a large number of different scenarios to be evaluated. Results show that this work has the potential to assist hospital system administrators in planning more optimized architectures according to their needs.
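For a flavour of the metrics the SPN model estimates, the sketch below approximates a single processing stage as an M/M/c queue and reports utilization and mean response time. This is a textbook approximation with placeholder parameters, not the paper's Stochastic Petri Net model.

```python
import math

# Mean response time and utilization of one processing stage, approximated
# as an M/M/c queue (Erlang-C). Parameter values are placeholders, not the
# calibrated values from the paper's SPN.
def mmc_metrics(arrival_rate, service_rate, servers):
    rho = arrival_rate / (servers * service_rate)            # utilization
    if rho >= 1:
        raise ValueError("unstable queue")
    a = arrival_rate / service_rate
    p0 = 1 / (sum(a**k / math.factorial(k) for k in range(servers))
              + a**servers / (math.factorial(servers) * (1 - rho)))
    lq = p0 * a**servers * rho / (math.factorial(servers) * (1 - rho) ** 2)
    w = lq / arrival_rate + 1 / service_rate                  # mean response time
    return rho, w

util, resp = mmc_metrics(arrival_rate=40.0, service_rate=15.0, servers=4)
print(f"utilization: {util:.0%}, mean response time: {resp * 1000:.1f} ms")
```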
{"title":"Stochastic Model for Evaluating Smart Hospitals Performance","authors":"Laécio Rodrigues, P. Endo, Francisco Airton Silva","doi":"10.1109/LATINCOM48065.2019.8937944","DOIUrl":"https://doi.org/10.1109/LATINCOM48065.2019.8937944","url":null,"abstract":"Hospital systems must be efficient to prevent loss of human lives. Low latency and high availability of resources are essential features to guarantee quality of service (QoS) in such environments. Taking advantage of Internet of Things (IoT) emergence, smart hospitals apper as a health revolution by capturing and transmitting patient data to physicians in real time through a wireless sensor network. For that, smart hospitals need local and remote servers for processing and storing data efficiently. Commonly, the patient information is shared among different devices, ensuring continuous operation and high availability. However, there is a significant difficulty in evaluating the performance of such systems in real contexts, because the failures are not tolerated (one can not unpluged the system to perform experiments) and the cost of a prototype implementation is high. To cover this issue, this paper adopts the analytical modeling approach to evaluate the performance of a smart hospital system, avoiding the investment in real equipment. Using Stochastic Petri Nets (SPNs), we propose a model to represent the architecture of a smart hospital, and estimate metrics related to the mean response time and resource utilization probability. The model are quite parametric, being possible to calibrate server resource capacity and service time. One can define 13 parameters, allowing to evaluate a large number of different scenarios. Results show that this work has the potential to assist hospital system administrators to plan more optimized architectures according to their needs.","PeriodicalId":120312,"journal":{"name":"2019 IEEE Latin-American Conference on Communications (LATINCOM)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117138887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Deep Reinforcement Learning Applied to Congestion Control in Fronthaul Networks
Pub Date: 2019-11-01 | DOI: 10.1109/LATINCOM48065.2019.8937857 | 2019 IEEE Latin-American Conference on Communications (LATINCOM)
Ingrid Nascimento, Ricardo S. Souza, Silvia Lins, Andrey Silva, A. Klautau
Fifth-generation wireless technologies embrace more flexible network architectures as a way of reducing deployment and operation costs while increasing user satisfaction. Centralized Radio Access Networks (C-RANs) play a fundamental role in this context, being envisioned for increased flexibility and lower deployment cost. More recent C-RAN architectures assume packetized fronthaul links connecting radio units to baseband processors, a more cost-efficient solution relying on statistical multiplexing. This shared infrastructure scenario brings new challenges, including network congestion in the fronthaul links. Since current solutions may neither scale nor react in time to meet microsecond-order delay requirements, this paper evaluates the adoption of machine learning-based techniques for congestion control in C-RAN scenarios. Deep Reinforcement Learning methods were evaluated through discrete-event simulations and compared with legacy TCP-based solutions. Promising results were found, with satisfactory throughput levels in all simulated scenarios and low average delay and packet loss compared with the TCP congestion control baseline.
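As a heavily simplified stand-in for the deep reinforcement learning agents evaluated in the paper, the sketch below trains a tabular Q-learning policy that adjusts a sending rate from the occupancy of a single bottleneck queue. States, actions, queue dynamics, and the reward are illustrative assumptions.

```python
import numpy as np

# Tabular Q-learning sketch for rate-based congestion control on one
# bottleneck queue. The paper uses deep RL; everything here is a toy model.
rng = np.random.default_rng(0)
N_STATES = 10                                       # queue-occupancy bins
ACTIONS = np.array([-1.0, 0.0, +1.0])               # rate deltas
CAPACITY, Q_MAX = 10.0, 100.0
q_table = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.1

rate, queue = 5.0, 0.0
for step in range(20_000):
    state = min(int(queue / Q_MAX * N_STATES), N_STATES - 1)
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(q_table[state]))
    rate = float(np.clip(rate + ACTIONS[a], 0.0, 20.0))
    queue = max(0.0, min(Q_MAX, queue + rate - CAPACITY))       # fluid queue update
    # reward: favour throughput, penalise queueing (a proxy for delay)
    reward = min(rate, CAPACITY) - 0.5 * queue / Q_MAX * CAPACITY
    next_state = min(int(queue / Q_MAX * N_STATES), N_STATES - 1)
    q_table[state, a] += alpha * (reward + gamma * q_table[next_state].max() - q_table[state, a])

print(f"final rate ~ {rate:.1f} (link capacity {CAPACITY}), final queue {queue:.1f}")
```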
{"title":"Deep Reinforcement Learning Applied to Congestion Control in Fronthaul Networks","authors":"Ingrid Nascimento, Ricardo S. Souza, Silvia Lins, Andrey Silva, A. Klautau","doi":"10.1109/LATINCOM48065.2019.8937857","DOIUrl":"https://doi.org/10.1109/LATINCOM48065.2019.8937857","url":null,"abstract":"Fifth-generation wireless technologies embrace more flexible network architectures as a way of reducing deployment and operation costs while increasing user satisfaction. Centralized Radio Access Networks (C-RANs) play a fundamental role in this context, being envisioned for increased flexibility and lower cost of deployment. More recent C-RAN architectures assume packetized fronthaul links connecting radio units to baseband processors, a more cost-efficient solution relying on statistical multiplexing. This shared infrastructure scenario brings new challenges, including network congestion in the fronthaul links. Since current solutions may neither scale nor react in time for the microsecond-order delay requirements, this paper evaluates the adoption of machine learning-based techniques for congestion control in C-RAN scenarios. Deep Reinforcement Learning methods were evaluated through discrete-event simulations and compared with legacy TCP-based solutions. Promising results were found with satisfactory throughput level in all simulated scenarios, achieving low rates of average delay and packet loss compared with the TCP congestion control baseline.","PeriodicalId":120312,"journal":{"name":"2019 IEEE Latin-American Conference on Communications (LATINCOM)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124819342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Analysis and Comparison of Several Mitigation Techniques for Middleton Class-A Noise
Pub Date: 2019-11-01 | DOI: 10.1109/LATINCOM48065.2019.8938020 | 2019 IEEE Latin-American Conference on Communications (LATINCOM)
Md. Sahabul Alam, Bassant Selim, Georges Kaddoum
Impulsive noise is a common impediment in many wireless, power line communication (PLC), and smart grid communication systems that prevents the system from achieving error-free transmission. In this paper, we compare and analyze several impulsive noise mitigation techniques for Middleton class-A noise, considering single-carrier modulation with low-density parity-check (LDPC) coded transmission. For this, we investigate the widely used non-linear methods, such as clipping, blanking, and combined clipping/blanking, to mitigate the noxious effects of impulsive noise. Although the performance of these techniques is widely acknowledged for simple Bernoulli-Gaussian impulsive noise mitigation in orthogonal frequency division multiplexing (OFDM)-based multi-carrier communication systems, their mitigation capability with regard to Middleton class-A noise remains unknown. We further investigate log-likelihood ratio (LLR)-based impulsive noise mitigation. Simulation results are provided to highlight the robustness of the LLR-based mitigation scheme over simple clipping/blanking schemes for the considered scenario. Moreover, our results show that while clipping performs better than blanking for Bernoulli-Gaussian noise, the latter shows better performance for Middleton class-A noise.
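The clipping, blanking, and combined clipping/blanking nonlinearities mentioned above can be written in a few lines; the sketch below applies them to BPSK symbols hit by a two-term Gaussian-mixture approximation of impulsive noise and compares the mean squared error against the clean symbols. Thresholds and noise parameters are illustrative, and the LDPC/LLR-based receiver from the paper is not reproduced.

```python
import numpy as np

# Clipping and blanking nonlinearities on a signal corrupted by impulsive
# noise. The two-term Gaussian mixture below is a crude stand-in for
# Middleton class-A noise; thresholds are illustrative, not optimized.
rng = np.random.default_rng(0)
n = 10_000
x = np.sign(rng.standard_normal(n))                      # BPSK symbols
impulse = rng.random(n) < 0.01                           # impulsive occurrences
noise = 0.1 * rng.standard_normal(n) + impulse * 3.0 * rng.standard_normal(n)
r = x + noise

T = 1.5                                                  # threshold
clipped = np.clip(r, -T, T)                              # clipping
blanked = np.where(np.abs(r) > T, 0.0, r)                # blanking
combined = np.where(np.abs(r) > 2 * T, 0.0, np.clip(r, -T, T))   # clipping/blanking

for name, y in [("no mitigation", r), ("clipping", clipped),
                ("blanking", blanked), ("clipping/blanking", combined)]:
    mse = np.mean((y - x) ** 2)
    print(f"{name:18s} MSE vs clean symbols = {mse:.3f}")
```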
{"title":"Analysis and Comparison of Several Mitigation Techniques for Middleton Class-A Noise","authors":"Md. Sahabul Alam, Bassant Selim, Georges Kaddoum","doi":"10.1109/LATINCOM48065.2019.8938020","DOIUrl":"https://doi.org/10.1109/LATINCOM48065.2019.8938020","url":null,"abstract":"Impulsive noise is a common impediment in many wireless, power line communication (PLC), and smart grid communication systems that prevents the system from achieving error-free transmission. In this paper, we compare and analyze several impulsive noise mitigation techniques for Middleton class-A noise considering single carrier modulation with low-density parity-check (LDPC) coded transmission. For this, we investigate the widely used non-linear methods such as clipping, blanking, and combined clipping/blanking to mitigate the noxious effects of impulsive noise. Although, the performance of these techniques are widely acknowledged for simple Bernoulli-Gaussian impulsive noise mitigation in case of orthogonal frequency division multiplexing (OFDM)-based multi-carrier communication systems, their mitigation capabilities in regards to Middleton class-A noise remains unknown. We further investigate the log-likelihood ratio (LLR)-based impulsive noise mitigation. Simulation results are provided to highlight the robustness of the LLR-based mitigation scheme over simple clipping/blanking schemes for the considered scenario. Moreover, our results show that while clipping performs better than blanking for Bernoulli-Gaussian noise, the later shows better performance in case of Middleton class-A noise.","PeriodicalId":120312,"journal":{"name":"2019 IEEE Latin-American Conference on Communications (LATINCOM)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130992203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}