
Latest publications from the 2021 IEEE Global Communications Conference (GLOBECOM)

MPTCP under Virtual Machine Scheduling Impact
Pub Date: 2021-12-01 DOI: 10.1109/GLOBECOM46510.2021.9685569
Phuong Ha, Lisong Xu
Multipath TCP (MPTCP) has captured the networking community's attention in recent years since it transfers data over multiple network interfaces simultaneously, thus increasing performance and stability. Existing works on MPTCP study its performance only in traditional wired and wireless networks. Meanwhile, cloud computing has been growing rapidly, with many applications deployed in private and public clouds, where virtual machine (VM) scheduling techniques are often adopted to share physical CPUs among VMs. This motivates us to study MPTCP's performance under the impact of VM scheduling. For the first time, we show that VM scheduling negatively impacts the throughput of all MPTCP subflows. Specifically, VM scheduling causes inaccuracy in computing the overall aggressiveness parameter of MPTCP congestion control, which slows the growth of the congestion windows of all MPTCP subflows instead of just a single subflow. This ultimately results in poor overall MPTCP performance in cloud networks. We propose a modified version of MPTCP that accounts for VM scheduling noise when computing the overall aggressiveness parameter and the congestion windows. Experimental results show that our modified MPTCP performs considerably better (with up to 80% throughput improvement) than the original MPTCP in cloud networks.
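For readers unfamiliar with the "overall aggressiveness parameter" the abstract refers to, the sketch below computes the standard coupled-increase (LIA) parameter from RFC 6356 and shows how RTT samples inflated by a hypothetical VM scheduling delay shrink the per-ACK window growth of every subflow. It is a minimal illustration of the baseline mechanism, not the authors' modified MPTCP, and all numbers are made up.

```python
# Minimal sketch of the coupled (LIA) aggressiveness parameter from RFC 6356.
# NOT the paper's modified algorithm; it only illustrates how RTTs inflated by
# a hypothetical VM scheduling delay reduce every subflow's window growth.

def lia_alpha(cwnd, rtt):
    """cwnd: per-subflow congestion windows (segments); rtt: per-subflow RTTs (s)."""
    total_cwnd = sum(cwnd)
    best = max(c / (r ** 2) for c, r in zip(cwnd, rtt))
    denom = sum(c / r for c, r in zip(cwnd, rtt)) ** 2
    return total_cwnd * best / denom

def lia_increase(i, cwnd, rtt):
    """Per-ACK window increase of subflow i under LIA (in segments)."""
    alpha = lia_alpha(cwnd, rtt)
    return min(alpha / sum(cwnd), 1.0 / cwnd[i])

cwnd = [10, 20]            # two subflows
rtt_clean = [0.02, 0.04]   # true path RTTs
rtt_noisy = [0.05, 0.07]   # RTTs inflated by an assumed 30 ms VM scheduling delay

print(lia_increase(0, cwnd, rtt_clean))   # growth with accurate RTTs
print(lia_increase(0, cwnd, rtt_noisy))   # slower growth when RTTs absorb scheduling noise
```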
Citations: 0
Exploiting Ensemble Learning for Edge-assisted Anomaly Detection Scheme in e-healthcare System
Pub Date: 2021-12-01 DOI: 10.1109/GLOBECOM46510.2021.9685745
Wei Yao, Kuan Zhang, Chong Yu, Hai Zhao
With the thriving of wearable devices and the widespread use of smartphones, the e-healthcare system has emerged to cope with the high demand for health services. However, this integrated smart health system is vulnerable to various attacks, including intrusion attacks. Traditional detection schemes generally lack the classifier diversity to identify attacks in complex scenarios that contain only a small amount of training data. Moreover, the use of cloud-based attack detection may result in higher detection latency. In this paper, we propose an Edge-assisted Anomaly Detection (EAD) scheme to detect malicious attacks. Specifically, we first identify four types of attackers according to their attacking capabilities. To distinguish attacks from normal behaviors, we then propose a wrapper feature selection method. This selection method eliminates the impact of irrelevant and redundant features so that detection accuracy can be improved. Moreover, we investigate the diversity of classifiers and exploit ensemble learning to improve the detection rate. To reduce the high detection latency in the cloud, edge nodes are used to concurrently implement the proposed lightweight scheme. We evaluate the EAD performance on two real-world datasets, i.e., the NSL-KDD and UNSW-NB15 datasets. The simulation results show that EAD outperforms other state-of-the-art methods in terms of accuracy, detection rate, and computational complexity. The analysis of detection time validates the fast detection of the proposed EAD compared with cloud-assisted schemes.
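The abstract combines a wrapper feature-selection step with an ensemble of diverse classifiers. The following minimal scikit-learn sketch shows that combination in its generic form on synthetic data; the dataset, feature counts, and base classifiers are placeholders rather than the EAD configuration evaluated on NSL-KDD and UNSW-NB15.

```python
# Generic wrapper feature selection + heterogeneous voting ensemble.
# Placeholder data and models, not the paper's EAD pipeline.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=30, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Wrapper selection: evaluate candidate feature subsets with a classifier in the loop.
selector = SequentialFeatureSelector(DecisionTreeClassifier(random_state=0),
                                     n_features_to_select=10, direction="forward", cv=3)
selector.fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

# Ensemble of heterogeneous base learners (classifier diversity).
ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000)),
                ("dt", DecisionTreeClassifier(random_state=0))],
    voting="soft")
ensemble.fit(X_tr_sel, y_tr)
print("accuracy:", ensemble.score(X_te_sel, y_te))
```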
Citations: 4
Interference Cooperation based Resource Allocation in NOMA Terrestrial-Satellite Networks
Pub Date: 2021-12-01 DOI: 10.1109/GLOBECOM46510.2021.9685107
Yaomin Zhang, Haijun Zhang, Huang‐Cheng Zhou, Wei Li
In this paper, an uplink non-orthogonal multiple access (NOMA) satellite-terrestrial network is investigated, where terrestrial base stations (BSs) can simultaneously communicate with the satellite over the backhaul, and user equipments (UEs) share fronthaul spectrum resources to communicate. The communication of satellite UEs is affected by cross-tier interference caused by terrestrial cellular UEs. Thus, a utility function consisting of the achieved system rate and the cross-tier interference is built. We aim to maximize this utility function while satisfying the constraints on the varying backhaul rate and the quality of service (QoS) of UEs. The optimization problem is decomposed into AP-UE association, bandwidth assignment, and power allocation sub-problems, which are solved by the proposed matching algorithm and the successive convex approximation (SCA) method, respectively. The simulation results show the effectiveness of the proposed algorithm.
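As a rough illustration of the successive convex approximation (SCA) step mentioned for the power-allocation sub-problem, the toy script below maximizes a two-user sum rate by repeatedly linearizing the concave interference term and solving the resulting concave surrogate. The channel gains, noise power, and power budgets are invented, and the objective is far simpler than the paper's utility function.

```python
# Toy SCA for a two-user power-allocation problem. Made-up numbers; the
# paper's actual objective, constraints, and matching step are not modeled.
import numpy as np
from scipy.optimize import minimize

h = np.array([[1.0, 0.3],     # h[i, j]: gain from user j's transmitter to receiver i
              [0.2, 0.8]])
sigma = 0.1                   # noise power
p_max = np.array([1.0, 1.0])  # per-user power budget

def interference(p, i):
    return sigma + sum(h[i, j] * p[j] for j in range(2) if j != i)

def sum_rate(p):
    return sum(np.log2(interference(p, i) + h[i, i] * p[i]) - np.log2(interference(p, i))
               for i in range(2))

def grad_subtracted_term(p):
    """Gradient of sum_i log2(interference_i(p)), the concave term SCA linearizes."""
    g = np.zeros(2)
    for i in range(2):
        for j in range(2):
            if j != i:
                g[j] += h[i, j] / (interference(p, i) * np.log(2))
    return g

p = np.array([0.5, 0.5])                      # feasible starting point
for _ in range(20):                           # SCA iterations
    p_k = p.copy()
    g = grad_subtracted_term(p_k)
    const = sum(np.log2(interference(p_k, i)) for i in range(2))

    def neg_surrogate(q):
        # Exact first log term; first-order expansion of the subtracted log
        # around p_k, so the surrogate lower-bounds the true sum rate.
        gain = sum(np.log2(interference(q, i) + h[i, i] * q[i]) for i in range(2))
        return -(gain - const - g @ (q - p_k))

    p = minimize(neg_surrogate, p_k, bounds=[(0, p_max[0]), (0, p_max[1])]).x

print("SCA power allocation:", p, "sum rate:", sum_rate(p))
```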
Citations: 1
Semantic Analysis and Preference Capturing on Attentive Networks for Rating Prediction
Pub Date: 2021-12-01 DOI: 10.1109/GLOBECOM46510.2021.9685227
Cheng-Han Chou, Bi-Ru Dai
Nowadays, people receive an enormous amount of information every day. However, they are only interested in information that matches their preferences. Thus, retrieving such information, in our case the reviews composed by users, becomes a significant task. Matrix Factorization (MF) based methods achieve fairly good performance on recommendation tasks. However, MF-based methods suffer from several crucial issues such as cold-start problems and data sparseness. To address these issues, numerous recommendation models have been proposed and have obtained strong performance. Nonetheless, we find that no existing framework comprehensively enhances performance by capturing both user preference and item trend. Hence, we propose a novel approach to tackle the aforementioned issues. A hierarchical construction with user preference and item trend capturing is employed in the proposed framework. Tested on several real-world datasets, it outperforms state-of-the-art models. Experimental results verify that our framework can extract useful features even from sparse data.
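To make the matrix-factorization baseline the abstract contrasts against concrete, here is a minimal SGD-trained MF model on a synthetic rating matrix. It is only the MF starting point; the paper's attentive, review-based architecture is not reproduced.

```python
# Minimal matrix-factorization baseline trained with SGD on toy ratings.
# Illustrative only; not the paper's attentive network.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 50, 40, 8
# Synthetic sparse ratings: (user, item, rating) triples.
ratings = [(rng.integers(n_users), rng.integers(n_items), rng.integers(1, 6))
           for _ in range(1000)]

P = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
Q = 0.1 * rng.standard_normal((n_items, k))   # item latent factors
lr, reg = 0.01, 0.05

for epoch in range(30):
    for u, i, r in ratings:
        err = r - P[u] @ Q[i]                 # prediction error for this rating
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])

rmse = np.sqrt(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in ratings]))
print("training RMSE:", round(rmse, 3))
```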
Citations: 0
Spider: Deep Learning-driven Sparse Mobile Traffic Measurement Collection and Reconstruction
Pub Date: 2021-12-01 DOI: 10.1109/GLOBECOM46510.2021.9685804
Yin Fang, A. Diallo, Chaoyun Zhang, P. Patras
Data-driven mobile network management hinges on accurate traffic measurements, which routinely require expensive specialized equipment and substantial local storage capabilities, and bear high data transfer overheads. To overcome these challenges, in this paper we propose Spider, a deep-learning-driven mobile traffic measurement collection and reconstruction framework, which reduces the cost of data collection while retaining state-of-the-art accuracy in inferring mobile traffic consumption with fine geographic granularity. Spider harnesses Reinforcement Learning and tackles large action spaces to train a policy network that selectively samples a minimal number of cells where data should be collected. We further introduce a fast and accurate neural model that extracts spatiotemporal correlations from historical data to reconstruct network-wide traffic consumption from sparse measurements. Experiments with a real-world mobile traffic dataset demonstrate that Spider samples 48% fewer cells than the benchmarks considered and yields up to 67% lower reconstruction errors than state-of-the-art interpolation methods. Moreover, our framework can adapt to previously unseen traffic patterns.
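A heavily simplified sketch of the reconstruction half of such a pipeline is shown below: a small CNN takes a sparsely sampled traffic grid plus its sampling mask and regresses the full grid, trained here on synthetic spatially correlated data. Spider's actual neural model, its reinforcement-learning cell selection, and the real traffic dataset are not reproduced.

```python
# Sparse-measurement reconstruction toy: masked grid + mask -> full grid.
# Synthetic data and a tiny CNN for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)

def synthetic_batch(batch=16, size=32, keep=0.5):
    """Smooth random 'traffic' grids, sampled at a fraction `keep` of cells."""
    full = torch.rand(batch, 1, size, size)
    full = nn.functional.avg_pool2d(full, 5, stride=1, padding=2)  # spatial correlation
    mask = (torch.rand(batch, 1, size, size) < keep).float()
    return torch.cat([full * mask, mask], dim=1), full

model = nn.Sequential(
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    x, target = synthetic_batch()
    loss = nn.functional.mse_loss(model(x), target)
    opt.zero_grad(); loss.backward(); opt.step()

x, target = synthetic_batch(keep=0.5)        # 50% of cells sampled
print("reconstruction MSE:", nn.functional.mse_loss(model(x), target).item())
```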
Citations: 1
Trade-offs in large blockchain-based IoT system design
Pub Date: 2021-12-01 DOI: 10.1109/GLOBECOM46510.2021.9685119
J. Misic, V. Mišić, Xiaolin Chang
The well-known Practical Byzantine Fault Tolerance (PBFT) consensus algorithm is not well suited to blockchain-based Internet of Things (IoT) systems that cover large geographical areas. To reduce queuing delays and eliminate a permanent leader as a single point of failure, we use a multiple-entry, multi-tier PBFT architecture and investigate the distribution of orderers that minimizes the total delay from the reception of a block of IoT data to the moment it is linked to the global blockchain. Our results indicate that the total number of orderers for a given system coverage and the total load are the main determinants of the block linking time. We show that, given the dimensions of an area and the number of orderers, partitioning the orderers into a smaller number of tiers with more clusters leads to lower block linking time. These observations may be used in the process of planning and dimensioning multi-tier cluster architectures for blockchain-enabled IoT systems.
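To make the stated trade-off concrete, here is a purely hypothetical back-of-the-envelope model (not taken from the paper): a block passes through one PBFT round per tier, and a round inside a cluster of m orderers is assumed to cost a + b*m^2 because PBFT's message complexity is quadratic in the cluster size. The constants and the even split of orderers across tiers and clusters are assumptions.

```python
# Hypothetical toy model of block linking time vs. tier/cluster partitioning.
# Constants a, b and the even split of orderers are assumptions, not paper data.

def linking_time(total_orderers, tiers, clusters_per_tier, a=5.0, b=0.05):
    orderers_per_tier = total_orderers / tiers
    cluster_size = orderers_per_tier / clusters_per_tier
    round_cost = a + b * cluster_size ** 2      # one PBFT round in one cluster
    return tiers * round_cost                   # one round per tier on the block's path

N = 120
for tiers, clusters in [(2, 3), (2, 6), (3, 2), (4, 2)]:
    print(f"{tiers} tiers x {clusters} clusters/tier -> "
          f"linking time ~ {linking_time(N, tiers, clusters):.1f} (arbitrary units)")
```

Under these assumed constants, fewer tiers with more (hence smaller) clusters yields the lowest linking time, in line with the trade-off described in the abstract.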
Citations: 0
Dual-Net for Joint Channel Estimation and Data Recovery in Grant-free Massive Access
Pub Date: 2021-12-01 DOI: 10.1109/GLOBECOM46510.2021.9685696
Yanna Bai, Wei Chen, Yuan Ma, Ning Wang, Bo Ai
In massive machine-type communications (mMTC), the conflict between millions of potential access devices and limited channel freedom leads to a sharp decrease in spectral efficiency. The sparse nature of mMTC provides a solution by using compressive sensing (CS) to perform multiuser detection (MUD), but this suffers from a conflict between high computational complexity and low-latency requirements. In this paper, we propose a novel Dual-Net for joint channel estimation and data recovery. The proposed Dual-Net utilizes the sparse consistency between the channel vector and the data matrix of all users. Experimental results show that the proposed Dual-Net outperforms existing CS algorithms and general neural networks in computational complexity and accuracy, which translates into reduced access delay and support for more devices.
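As background for the compressive-sensing formulation, the snippet below runs a classical orthogonal matching pursuit recovery on a toy real-valued grant-free access model with a handful of active users. It is a baseline CS detector for context only, not the proposed Dual-Net, and it omits the complex-valued pilots and joint channel/data recovery of a real system.

```python
# Baseline CS multiuser detection via OMP on a toy real-valued model.
# Not the Dual-Net; shown only to make the sparse-recovery setting concrete.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
n_users, n_resources, n_active = 200, 60, 6     # many potential users, few active

A = rng.standard_normal((n_resources, n_users)) / np.sqrt(n_resources)  # pilot matrix
x = np.zeros(n_users)
active = rng.choice(n_users, n_active, replace=False)
x[active] = rng.standard_normal(n_active)        # effective channel of active users

y = A @ x + 0.01 * rng.standard_normal(n_resources)   # received superimposed signal

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_active)
omp.fit(A, y)
detected = np.flatnonzero(omp.coef_)
print("true active users:    ", np.sort(active))
print("detected active users:", detected)
```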
Citations: 2
Caching Assisted Correlated Task Offloading for IoT Devices in Mobile Edge Computing
Pub Date: 2021-12-01 DOI: 10.1109/GLOBECOM46510.2021.9685828
Chaogang Tang, Chunsheng Zhu, Huaming Wu, Chunyan Liu, J. Rodrigues
The fast-growing Internet of Things (IoT) has generated a vast number of tasks that need to be performed efficiently. Owing to the drawbacks of the sensor-to-cloud computing paradigm in IoT, mobile edge computing (MEC) has recently become a hot topic. Against this backdrop, this paper focuses on the offloading of tasks characterized by intrinsic correlations, which most existing works have not considered. For the sequential arrival of such correlated tasks, the future workload can be efficiently reduced by caching the current computational result. Specifically, we resort to Lyapunov optimization to handle the long-term constraint on energy consumption. Simulation results reveal that our approach is superior to other approaches in the optimization of response latency and energy consumption.
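A toy drift-plus-penalty loop in the spirit of the Lyapunov-based design is sketched below: a virtual queue tracks how far the average energy use exceeds a budget, and in each slot the device picks the action minimizing V*latency + Q*energy. The task costs, cache-hit probability for correlated tasks, energy budget, and trade-off weight V are all invented for illustration and are not the paper's formulation.

```python
# Toy Lyapunov drift-plus-penalty loop for latency/energy trade-off.
# All numbers below are invented; this is not the paper's exact model.
import random
random.seed(0)

V = 50.0            # latency/energy trade-off weight
e_budget = 0.6      # long-term average energy budget per slot
p_cache_hit = 0.3   # chance a correlated task's result is already cached at the edge
Q = 0.0             # virtual energy queue

total_delay = total_energy = 0.0
for t in range(10_000):
    # Per-slot (latency, energy) of the two candidate actions:
    local = (0.7, 0.9)                                       # slow-ish, energy-hungry CPU
    offload = (0.2 if random.random() < p_cache_hit else 1.5, 0.4)  # fast on a cache hit
    # Drift-plus-penalty rule: minimize V*delay + Q*energy.
    delay, energy = min((local, offload), key=lambda c: V * c[0] + Q * c[1])
    Q = max(Q + energy - e_budget, 0.0)                      # virtual queue update
    total_delay += delay
    total_energy += energy

print("avg delay :", total_delay / 10_000)
print("avg energy:", total_energy / 10_000, "(budget", e_budget, ")")
```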
Citations: 3
CNN-Based Signal Detector for IM-OFDMA
Pub Date: 2021-12-01 DOI: 10.1109/GLOBECOM46510.2021.9685285
Özgür Alaca, S. Althunibat, Serhan Yarkan, Scott L. Miller, K. Qaraqe
The recently proposed index modulation-based uplink orthogonal frequency division multiple access (IM-OFDMA) scheme outperforms conventional schemes in terms of spectral efficiency and error performance. However, the induced computational complexity at the receiver forms a bottleneck in real-time implementation due to the joint detection of all users. In this paper, based on deep learning principles, a convolutional neural network (CNN)-based signal detector is proposed for data detection in IM-OFDMA systems instead of the optimum maximum likelihood (ML) detector. The CNN-based detector is constructed by offline training on a dataset created from IM-OFDMA transmissions. It is then applied directly to the IM-OFDMA communication scheme to detect the transmitted signal, treating the received signal and the channel state information (CSI) as inputs. The proposed CNN-based detector reduces the order of the computational complexity from O(n·2^n) to O(n^2) compared to the ML detector, with only a slight impact on the error performance.
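The skeleton below shows what such a detector can look like in PyTorch: a small CNN whose input stacks the real and imaginary parts of the received signal and the CSI, and whose output scores the candidate transmitted messages. The tensor shapes, layer sizes, and number of candidate messages are placeholders rather than the configuration evaluated in the paper.

```python
# Skeletal CNN detector taking received signal + CSI as input channels.
# Shapes and sizes are placeholders, not the paper's configuration.
import torch
import torch.nn as nn

n_subcarriers = 64     # assumed OFDMA block length
n_classes = 16         # assumed number of candidate index/symbol combinations

class CNNDetector(nn.Module):
    def __init__(self):
        super().__init__()
        # 4 input channels: Re/Im of received signal, Re/Im of CSI.
        self.features = nn.Sequential(
            nn.Conv1d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU())
        self.classifier = nn.Linear(32 * n_subcarriers, n_classes)

    def forward(self, y, csi):
        x = torch.cat([y, csi], dim=1)           # (batch, 4, n_subcarriers)
        return self.classifier(self.features(x).flatten(1))

model = CNNDetector()
y = torch.randn(8, 2, n_subcarriers)     # Re/Im of the received signal
csi = torch.randn(8, 2, n_subcarriers)   # Re/Im of the estimated channel
logits = model(y, csi)                   # train with nn.CrossEntropyLoss against labels
print(logits.shape)                      # torch.Size([8, 16])
```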
Citations: 6
Thermal Profiling by WiFi Sensing in IoT Networks
Pub Date: 2021-12-01 DOI: 10.1109/GLOBECOM46510.2021.9686022
Junye Li, Aryan Sharma, Deepak Mishra, Aruna Seneviratne
Extensive literature has shown the possibility of using WiFi to sense large-scale environmental features such as people, movement, and human gestures. To the best of our knowledge, there has been no investigation into identifying the microscopic changes in a channel due to atmospheric temperature variations. We identify this as a real-world use case, since there are scenarios, such as data centres, where WiFi traffic is omnipresent and temperature monitoring is important. We develop a framework for sensing temperature using WiFi Channel State Information (CSI), proposing that the increased kinetic energy of ambient gas particles will affect the wireless link. To validate this, our paper uses low-wavelength 5 GHz WiFi CSI from commodity hardware to measure how the channel changes as the ambient temperature is raised. Empirically, we demonstrate on our testing platform that the CSI amplitude drops at a rate of 13 per degree Celsius of rise in ambient temperature, and we develop regression models with ±1°C accuracy in the majority of cases. Moreover, we show that WiFi subcarriers exhibit frequency-selective behaviour in their varying responses to the rise in ambient temperature.
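A minimal regression in the spirit of the reported models is sketched below: fit CSI amplitude against ambient temperature and invert the fit to estimate temperature from new readings. The synthetic data simply assumes the reported trend of 13 amplitude units per degree Celsius plus noise; it is not the authors' measurement set.

```python
# Linear fit of CSI amplitude vs. temperature, inverted for temperature estimation.
# Synthetic data assuming the reported slope; not the paper's dataset.
import numpy as np

rng = np.random.default_rng(0)
temps = np.linspace(20, 40, 200)                        # ambient temperature (deg C)
amplitude = 900 - 13 * temps + rng.normal(0, 5, 200)    # assumed linear drop + noise

slope, intercept = np.polyfit(temps, amplitude, 1)      # amplitude = slope*T + intercept
print(f"fitted slope: {slope:.1f} amplitude units per deg C")

def estimate_temperature(amp):
    """Invert the linear fit to map an amplitude reading back to temperature."""
    return (amp - intercept) / slope

print("estimated T for a reading of 540:", round(estimate_temperature(540.0), 1), "deg C")
```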
Citations: 1