MPTCP under Virtual Machine Scheduling Impact
Pub Date: 2021-12-01, DOI: 10.1109/GLOBECOM46510.2021.9685569
Phuong Ha, Lisong Xu
Multipath TCP (MPTCP) has captured the networking community's attention in recent years because it transfers data over multiple network interfaces simultaneously, thus increasing performance and stability. Existing works on MPTCP study its performance only in traditional wired and wireless networks. Meanwhile, cloud computing has been growing rapidly, with many applications deployed in private and public clouds, where virtual machine (VM) scheduling techniques are often adopted to share physical CPUs among VMs. This motivates us to study MPTCP's performance under the impact of VM scheduling. For the first time, we show that VM scheduling negatively impacts the throughput of all MPTCP subflows. Specifically, VM scheduling causes inaccuracy in computing the overall aggressiveness parameter of MPTCP congestion control, which slows the growth of the congestion windows of all MPTCP subflows instead of just a single subflow, and ultimately degrades the overall performance of MPTCP in cloud networks. We propose a modified version of MPTCP that accounts for VM scheduling noise when computing its overall aggressiveness parameter and its congestion windows. Experimental results show that our modified MPTCP performs considerably better (with up to 80% throughput improvement) than the original MPTCP in cloud networks.
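For context, the "overall aggressiveness parameter" in MPTCP's coupled congestion control (LIA, RFC 6356) is computed from per-subflow windows and RTTs, so RTT noise from VM scheduling pauses distorts it for every subflow at once. The sketch below illustrates this mechanism with the standard LIA formulas; the window and RTT values are hypothetical, and this is not the authors' modified algorithm.

```python
# Minimal sketch of MPTCP LIA (RFC 6356), whose shared parameter alpha
# couples all subflows: a distorted RTT on one subflow changes the
# additive increase applied to every subflow.

def lia_alpha(cwnds, rtts):
    """Aggressiveness parameter alpha of MPTCP LIA (RFC 6356).

    cwnds: per-subflow congestion windows (packets)
    rtts:  per-subflow round-trip times (seconds)
    """
    total_cwnd = sum(cwnds)
    best = max(c / r ** 2 for c, r in zip(cwnds, rtts))
    denom = sum(c / r for c, r in zip(cwnds, rtts)) ** 2
    return total_cwnd * best / denom

def lia_increase(alpha, cwnds, i):
    """Per-ACK congestion-window increase of subflow i (in packets)."""
    return min(alpha / sum(cwnds), 1.0 / cwnds[i])

# Two subflows with equal windows; a VM-scheduling pause inflates the
# measured RTT of subflow 0 from 50 ms to 80 ms (hypothetical values).
cwnds = [10.0, 10.0]
a_clean = lia_alpha(cwnds, [0.05, 0.05])
a_noisy = lia_alpha(cwnds, [0.08, 0.05])   # RTT inflated by scheduling noise
print(a_clean, lia_increase(a_clean, cwnds, 0))
print(a_noisy, lia_increase(a_noisy, cwnds, 0))
```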
Exploiting Ensemble Learning for Edge-assisted Anomaly Detection Scheme in e-healthcare System
Pub Date: 2021-12-01, DOI: 10.1109/GLOBECOM46510.2021.9685745
Wei Yao, Kuan Zhang, Chong Yu, Hai Zhao
With the proliferation of wearable devices and the widespread use of smartphones, e-healthcare systems have emerged to cope with the high demand for health services. However, such integrated smart health systems are vulnerable to various attacks, including intrusion attacks. Traditional detection schemes generally lack the classifier diversity needed to identify attacks in complex scenarios with only a small amount of training data. Moreover, cloud-based attack detection may incur high detection latency. In this paper, we propose an Edge-assisted Anomaly Detection (EAD) scheme to detect malicious attacks. Specifically, we first identify four types of attackers according to their attacking capabilities. To distinguish attacks from normal behaviors, we then propose a wrapper feature selection method, which eliminates the impact of irrelevant and redundant features and thereby improves detection accuracy. Moreover, we investigate classifier diversity and exploit ensemble learning to improve the detection rate. To avoid the high detection latency of the cloud, edge nodes concurrently run the proposed lightweight scheme. We evaluate the EAD performance on two real-world datasets, the NSL-KDD and UNSW-NB15 datasets. Simulation results show that EAD outperforms other state-of-the-art methods in terms of accuracy, detection rate, and computational complexity. An analysis of detection time validates the fast detection of the proposed EAD compared with cloud-assisted schemes.
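As a rough illustration of the two ingredients the abstract names, wrapper feature selection and an ensemble of diverse classifiers, here is a scikit-learn sketch on synthetic data. The authors' exact wrapper criterion, base learners, and NSL-KDD/UNSW-NB15 preprocessing are not given in the abstract, so this is an illustration rather than a reproduction.

```python
# Wrapper feature selection (classifier-in-the-loop) followed by a
# soft-voting ensemble of deliberately unlike base models.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=40, n_informative=8,
                           random_state=0)  # stand-in for intrusion records
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Wrapper selection: candidate feature subsets are scored by a classifier's
# cross-validated accuracy, pruning irrelevant and redundant features.
selector = SequentialFeatureSelector(DecisionTreeClassifier(random_state=0),
                                     n_features_to_select=10).fit(X_tr, y_tr)
X_tr_s, X_te_s = selector.transform(X_tr), selector.transform(X_te)

# Classifier diversity via an ensemble that averages predicted probabilities.
ensemble = VotingClassifier([
    ("rf", RandomForestClassifier(random_state=0)),
    ("knn", KNeighborsClassifier()),
    ("lr", LogisticRegression(max_iter=1000)),
], voting="soft").fit(X_tr_s, y_tr)
print("detection accuracy:", ensemble.score(X_te_s, y_te))
```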
Interference Cooperation based Resource Allocation in NOMA Terrestrial-Satellite Networks
Pub Date: 2021-12-01, DOI: 10.1109/GLOBECOM46510.2021.9685107
Yaomin Zhang, Haijun Zhang, Huang‐Cheng Zhou, Wei Li
In this paper, an uplink non-orthogonal multiple access (NOMA) satellite-terrestrial network is investigated, in which terrestrial base stations (BSs) communicate with the satellite over the backhaul while user equipments (UEs) share fronthaul spectrum resources. The communication of satellite UEs is affected by cross-tier interference from terrestrial cellular UEs. We therefore build a utility function that combines the system achievable rate with the cross-tier interference, and aim to maximize it subject to the varying backhaul rate and the quality-of-service (QoS) constraints of the UEs. The optimization problem is decomposed into AP-UE association, bandwidth assignment, and power allocation sub-problems, which are solved by the proposed matching algorithm and the successive convex approximation (SCA) method, respectively. Simulation results show the effectiveness of the proposed algorithm.
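To make the AP-UE association step concrete, the sketch below shows textbook capacity-constrained deferred acceptance, the canonical form of matching used for such association sub-problems. The paper's actual algorithm and preference lists (which would be derived from achievable rates and cross-tier interference) are not given in the abstract, so the preferences here are invented.

```python
# Deferred-acceptance matching between UEs and APs with capacity limits.

def deferred_acceptance(ue_prefs, ap_prefs, capacity):
    """ue_prefs[u]: APs ordered by u's preference.
    ap_prefs[a]: ranking of UEs (lower rank = more preferred).
    capacity[a]: maximum number of UEs AP a can admit."""
    matched = {a: [] for a in ap_prefs}
    next_choice = {u: 0 for u in ue_prefs}
    free = list(ue_prefs)
    while free:
        u = free.pop()
        if next_choice[u] >= len(ue_prefs[u]):
            continue  # u exhausted its list and stays unmatched
        a = ue_prefs[u][next_choice[u]]
        next_choice[u] += 1
        matched[a].append(u)
        if len(matched[a]) > capacity[a]:
            # Over capacity: the AP evicts its least-preferred tentative UE.
            worst = max(matched[a], key=lambda x: ap_prefs[a][x])
            matched[a].remove(worst)
            free.append(worst)
    return matched

ue_prefs = {"ue1": ["ap1", "ap2"], "ue2": ["ap1", "ap2"], "ue3": ["ap1"]}
ap_prefs = {"ap1": {"ue1": 0, "ue2": 1, "ue3": 2},
            "ap2": {"ue1": 0, "ue2": 1, "ue3": 2}}
print(deferred_acceptance(ue_prefs, ap_prefs, {"ap1": 1, "ap2": 2}))
```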
Semantic Analysis and Preference Capturing on Attentive Networks for Rating Prediction
Pub Date: 2021-12-01, DOI: 10.1109/GLOBECOM46510.2021.9685227
Cheng-Han Chou, Bi-Ru Dai
Nowadays, people receive an enormous amount of information every day, but they are only interested in information that matches their preferences. Retrieving such information, in our case the reviews composed by users, therefore becomes a significant task. Matrix factorization (MF) based methods achieve fairly good performance on recommendation tasks; however, they face several crucial issues, such as cold-start problems and data sparseness. Numerous recommendation models have been proposed to address these issues and have achieved strong performance. Nonetheless, no existing framework comprehensively improves performance by capturing both user preference and item trend. Hence, we propose a novel approach that tackles the aforementioned issues through a hierarchical construction that captures user preference and item trend. Experiments on several real-world datasets show that our framework outperforms state-of-the-art models and can extract useful features even from sparse data.
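For readers unfamiliar with the MF baseline the framework is positioned against, here is the standard formulation: factorize the sparse user-item rating matrix into low-dimensional user and item embeddings fitted by SGD. This is the generic baseline, not the proposed attentive model; all hyperparameters below are arbitrary.

```python
# Matrix factorization with SGD: predicted rating r_hat(u, i) = P[u] @ Q[i].
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 50, 40, 8
ratings = [(rng.integers(n_users), rng.integers(n_items),
            rng.integers(1, 6)) for _ in range(500)]  # (user, item, rating)

P = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
Q = 0.1 * rng.standard_normal((n_items, k))   # item latent factors
lr, reg = 0.01, 0.05                          # learning rate, L2 penalty
for epoch in range(30):
    for u, i, r in ratings:
        err = r - P[u] @ Q[i]                 # prediction error on one rating
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])

rmse = np.sqrt(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in ratings]))
print("training RMSE:", rmse)
```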
Spider: Deep Learning-driven Sparse Mobile Traffic Measurement Collection and Reconstruction
Pub Date: 2021-12-01, DOI: 10.1109/GLOBECOM46510.2021.9685804
Yin Fang, A. Diallo, Chaoyun Zhang, P. Patras
Data-driven mobile network management hinges on accurate traffic measurements, which routinely require expensive specialized equipment and substantial local storage, and bear high data transfer overheads. To overcome these challenges, in this paper we propose Spider, a deep-learning-driven framework for mobile traffic measurement collection and reconstruction, which reduces the cost of data collection while retaining state-of-the-art accuracy in inferring mobile traffic consumption at fine geographic granularity. Spider harnesses reinforcement learning, taming a large action space to train a policy network that selects a minimal set of cells where data should be collected. We further introduce a fast and accurate neural model that extracts spatiotemporal correlations from historical data to reconstruct network-wide traffic consumption from sparse measurements. Experiments with a real-world mobile traffic dataset demonstrate that Spider samples 48% fewer cells than the benchmarks considered and yields up to 67% lower reconstruction errors than state-of-the-art interpolation methods. Moreover, our framework can adapt to previously unseen traffic patterns.
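Spider itself is a trained policy plus a neural reconstruction model; as a self-contained point of reference, the sketch below shows the kind of interpolation baseline the paper reports beating: reconstructing unmeasured cells' traffic from sparse samples by inverse-distance weighting (IDW). The grid size, sample fraction, and synthetic traffic map are all invented for illustration.

```python
# IDW baseline: estimate traffic at every cell from sparse measurements.
import numpy as np

rng = np.random.default_rng(1)
grid = np.array([(x, y) for x in range(20) for y in range(20)], float)
true = np.sin(grid[:, 0] / 4) + np.cos(grid[:, 1] / 5)  # synthetic traffic map

sampled = rng.choice(len(grid), size=len(grid) // 2, replace=False)  # measured cells

def idw(points, values, query, power=2.0, eps=1e-9):
    """Inverse-distance-weighted estimate at each query location."""
    d = np.linalg.norm(points[None, :, :] - query[:, None, :], axis=2)
    w = 1.0 / (d ** power + eps)            # nearer samples weigh more
    return (w * values[None, :]).sum(axis=1) / w.sum(axis=1)

recon = idw(grid[sampled], true[sampled], grid)
print("mean absolute reconstruction error:", np.abs(recon - true).mean())
```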
Trade-offs in large blockchain-based IoT system design
Pub Date: 2021-12-01, DOI: 10.1109/GLOBECOM46510.2021.9685119
J. Misic, V. Mišić, Xiaolin Chang
The well-known Practical Byzantine Fault Tolerance (PBFT) consensus algorithm is not well suited to blockchain-based Internet of Things (IoT) systems that cover large geographical areas. To reduce queuing delays and eliminate a permanent leader as a single point of failure, we use a multiple-entry, multi-tier PBFT architecture and investigate the distribution of orderers that minimizes the total delay from the reception of a block of IoT data to the moment it is linked to the global blockchain. Our results indicate that, for a given system coverage and total load, the total number of orderers is the main determinant of the block linking time. We show that, given the dimensions of an area and the number of orderers, partitioning the orderers into fewer tiers with more clusters leads to lower block linking time. These observations may be used when planning and dimensioning multi-tier cluster architectures for blockchain-enabled IoT systems.
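The paper's delay model is not reproduced here, but the standard PBFT sizing facts that drive such dimensioning can be sketched numerically: a cluster of n orderers tolerates f = (n - 1) // 3 Byzantine nodes, needs quorums of 2f + 1, and exchanges O(n^2) messages per phase, which is why many small clusters can outperform one large one.

```python
# Standard PBFT sizing arithmetic for alternative cluster partitions.

def pbft_profile(n):
    f = (n - 1) // 3          # tolerated Byzantine faults
    quorum = 2 * f + 1        # prepare/commit quorum size
    messages = n * (n - 1)    # all-to-all messages per protocol phase, O(n^2)
    return f, quorum, messages

total_orderers = 24           # hypothetical deployment size
for clusters in (1, 2, 4, 8):
    n = total_orderers // clusters
    f, q, m = pbft_profile(n)
    print(f"{clusters} cluster(s) of {n}: f={f}, quorum={q}, "
          f"messages/phase per cluster={m}")
```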
Dual-Net for Joint Channel Estimation and Data Recovery in Grant-free Massive Access
Pub Date: 2021-12-01, DOI: 10.1109/GLOBECOM46510.2021.9685696
Yanna Bai, Wei Chen, Yuan Ma, Ning Wang, Bo Ai
In massive machine-type communications (mMTC), the conflict between millions of potential access devices and limited channel degrees of freedom leads to a sharp decrease in spectral efficiency. The sparse nature of mMTC allows compressive sensing (CS) to be used for multiuser detection (MUD), but this creates a conflict between high computational complexity and low-latency requirements. In this paper, we propose Dual-Net, a novel dual network for joint channel estimation and data recovery that exploits the sparsity consistency between the channel vector and the data matrix of all users. Experimental results show that the proposed Dual-Net outperforms existing CS algorithms and general neural networks in both computational complexity and accuracy, which translates into reduced access delay and more supported devices.
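As a point of reference for the CS algorithms Dual-Net is compared against, the sketch below shows classic orthogonal matching pursuit (OMP) recovering which of N candidate users are active from M < N observations. This is the iterative baseline, not the proposed network; all dimensions are arbitrary.

```python
# OMP for compressive-sensing multiuser detection: greedily identify the
# sparse set of active users, then least-squares-fit their contributions.
import numpy as np

rng = np.random.default_rng(2)
N, M, K = 100, 40, 5                           # users, observations, active users
A = rng.standard_normal((M, N)) / np.sqrt(M)   # pilot/measurement matrix
x = np.zeros(N)
active = rng.choice(N, K, replace=False)
x[active] = rng.standard_normal(K)             # sparse user activity vector
y = A @ x                                      # noiseless received signal

support, residual = [], y.copy()
for _ in range(K):                             # grow the support greedily
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ sol

print("true active users:    ", sorted(active))
print("detected active users:", sorted(support))
```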
Caching Assisted Correlated Task Offloading for IoT Devices in Mobile Edge Computing
Pub Date: 2021-12-01, DOI: 10.1109/GLOBECOM46510.2021.9685828
Chaogang Tang, Chunsheng Zhu, Huaming Wu, Chunyan Liu, J. Rodrigues
The fast-growing Internet of Things (IoT) generates a vast number of tasks that need to be performed efficiently. Owing to the drawbacks of the sensor-to-cloud computing paradigm in IoT, mobile edge computing (MEC) has recently become a hot topic. Against this backdrop, we focus on the offloading of tasks with intrinsic correlations, which most existing works have not considered. Because such correlated tasks arrive sequentially, caching the current computational result can substantially reduce future workload. Specifically, we resort to Lyapunov optimization to handle the long-term constraint on energy consumption. Simulation results reveal that our approach outperforms other approaches in optimizing response latency and energy consumption.
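A toy drift-plus-penalty sketch of the Lyapunov technique the abstract invokes: a virtual queue Q tracks accumulated energy debt against a long-term budget, and each slot the device picks the action minimizing V*latency + Q*energy, with cached results of correlated tasks appearing as a cheap third option. The action costs, budget, and cache-hit pattern are invented for illustration.

```python
# Drift-plus-penalty decision rule with a virtual energy queue.
actions = {            # action: (latency, energy) per task -- hypothetical numbers
    "local":   (4.0, 3.0),   # compute on the IoT device
    "offload": (1.5, 5.0),   # ship the task to the edge server
    "cached":  (0.5, 0.5),   # reuse a cached result of a correlated task
}
cache_hit = [False, False, True, False, True]  # slots where a cached result exists
energy_budget = 2.5    # long-term average energy target per slot
V, Q = 10.0, 0.0       # V trades latency against constraint slack; Q is the queue

for slot, hit in enumerate(cache_hit):
    feasible = {a: c for a, c in actions.items() if a != "cached" or hit}
    # Drift-plus-penalty rule: pick the action minimizing V*latency + Q*energy.
    act = min(feasible, key=lambda a: V * feasible[a][0] + Q * feasible[a][1])
    latency, energy = feasible[act]
    Q = max(Q + energy - energy_budget, 0.0)   # queue grows when we overspend
    print(f"slot {slot}: {act:7s} latency={latency} Q={Q:.1f}")
```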
CNN-Based Signal Detector for IM-OFDMA
Pub Date: 2021-12-01, DOI: 10.1109/GLOBECOM46510.2021.9685285
Özgür Alaca, S. Althunibat, Serhan Yarkan, Scott L. Miller, K. Qaraqe
The recently proposed index modulation-based uplink orthogonal frequency division multiple access (IM-OFDMA) scheme outperforms conventional schemes in terms of spectral efficiency and error performance. However, the computational complexity it induces at the receiver, due to the joint detection of all users, forms a bottleneck for real-time implementation. In this paper, based on deep learning principles, we propose a convolutional neural network (CNN)-based signal detector for data detection in IM-OFDMA systems in place of the optimal maximum likelihood (ML) detector. The CNN-based detector is constructed by offline training on a dataset created from IM-OFDMA transmissions, and is then applied directly to the IM-OFDMA communication scheme to detect the transmitted signal, treating the received signal and the channel state information (CSI) as inputs. The proposed CNN-based detector reduces the order of the computational complexity from O(n·2^n) to O(n^2) compared to the ML detector, with only a slight impact on error performance.
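The paper's exact architecture is not given in the abstract; this PyTorch sketch only illustrates the stated interface of such a detector, with the received signal and CSI stacked as input channels and per-subcarrier symbol classes as output. The layer sizes, subcarrier count, and class count are all invented.

```python
# Toy CNN detector: inputs are Re/Im of the received block and Re/Im of
# the CSI; outputs are class logits for each subcarrier.
import torch
import torch.nn as nn

n_subcarriers, n_classes = 16, 8   # hypothetical IM-OFDMA dimensions

detector = nn.Sequential(
    nn.Conv1d(4, 32, kernel_size=3, padding=1),  # 4 channels: Re/Im of y and CSI
    nn.ReLU(),
    nn.Conv1d(32, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * n_subcarriers, n_subcarriers * n_classes),
)

y_and_csi = torch.randn(1, 4, n_subcarriers)      # one received OFDMA block
logits = detector(y_and_csi).view(1, n_subcarriers, n_classes)
symbols = logits.argmax(dim=2)                    # hard decision per subcarrier
print(symbols.shape)  # torch.Size([1, 16]); training would use cross-entropy
```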
Thermal Profiling by WiFi Sensing in IoT Networks
Pub Date: 2021-12-01, DOI: 10.1109/GLOBECOM46510.2021.9686022
Junye Li, Aryan Sharma, Deepak Mishra, Aruna Seneviratne
Extensive literature has shown that WiFi can be used to sense large-scale environmental features such as people, movement, and human gestures. To the best of our knowledge, however, there has been no investigation of the microscopic changes in a channel due to atmospheric temperature variations. We identify this as a real-world use case, since there are scenarios, such as data centres, where WiFi traffic is omnipresent and temperature monitoring is important. We develop a framework for sensing temperature using WiFi Channel State Information (CSI), proposing that the increased kinetic energy of ambient gas particles will affect the wireless link. To validate this, we use short-wavelength 5 GHz WiFi CSI from commodity hardware to measure how the channel changes as the ambient temperature is raised. Empirically, we demonstrate that the CSI amplitude drops at a rate of 13 per degree Celsius rise in ambient temperature on our testing platform, and we develop regression models with ±1°C accuracy in the majority of cases. Moreover, we show that WiFi subcarriers exhibit frequency-selective behaviour in their varying responses to the rise in ambient temperature.
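A minimal sketch of the regression idea under stated assumptions: if mean CSI amplitude falls roughly linearly with ambient temperature, a linear fit inverts amplitude back to degrees Celsius. The synthetic slope below mimics the reported drop per degree; the intercept, noise level, and temperature range are invented, not measured data.

```python
# Fit temperature as a linear function of CSI amplitude on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
temps = np.linspace(20, 40, 200)                                # ground truth, deg C
amplitude = 1500 - 13.0 * temps + rng.normal(0, 5, temps.size)  # assumed channel model

model = LinearRegression().fit(amplitude.reshape(-1, 1), temps)
pred = model.predict(amplitude.reshape(-1, 1))
print("mean absolute error (deg C):", np.abs(pred - temps).mean())
```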