
Latest Publications: IEEE Transactions on Machine Learning in Communications and Networking

Sybil Attack Detection Based on Signal Clustering in Vehicular Networks
Pub Date : 2024-06-05 DOI: 10.1109/TMLCN.2024.3410208
Halit Bugra Tulay;Can Emre Koksal
With the growing adoption of vehicular networks, ensuring the security of these networks is becoming increasingly crucial. However, the broadcast nature of communication in these networks creates numerous privacy and security concerns. In particular, the Sybil attack, where attackers can use multiple identities to disseminate false messages, cause service delays, or gain control of the network, poses a significant threat. To combat this attack, we propose a novel approach utilizing the channel state information (CSI) of vehicles. Our approach leverages the distinct spatio-temporal variations of CSI samples obtained from vehicular communication signals to detect these attacks. We conduct extensive real-world experiments using vehicle-to-everything (V2X) data gathered from dedicated short-range communications (DSRC) in vehicular networks. Our results demonstrate a high detection rate of over 98% in the real-world experiments, showcasing the practicality and effectiveness of our method in realistic vehicular scenarios. Furthermore, we rigorously test our approach through advanced ray-tracing simulations in urban environments, which demonstrate high efficacy even in complex scenarios involving various vehicles. This makes our approach a valuable, hardware-independent solution for V2X technologies at major intersections.
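To make the clustering idea concrete, the sketch below flags pairs of claimed identities whose CSI traces are nearly identical, on the premise that they originate from one physical radio. It is only an illustration, not the authors' detector: the synthetic CSI vectors, the Pearson-correlation similarity, and the 0.95 threshold are all assumptions.

```python
import numpy as np

def detect_sybil(csi_by_identity, corr_threshold=0.95):
    """csi_by_identity maps each claimed identity to a 1-D vector of |CSI| samples."""
    ids = list(csi_by_identity)
    suspicious_pairs = []
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            a, b = csi_by_identity[ids[i]], csi_by_identity[ids[j]]
            corr = np.corrcoef(a, b)[0, 1]      # similarity of the two CSI traces
            if corr > corr_threshold:           # near-identical CSI -> likely one physical radio
                suspicious_pairs.append((ids[i], ids[j], float(corr)))
    return suspicious_pairs

# Toy example: "veh_B" and "veh_C" are forged by the same attacker radio.
rng = np.random.default_rng(0)
base = np.abs(rng.standard_normal(128))
csi = {
    "veh_A": np.abs(rng.standard_normal(128)),
    "veh_B": base + 0.01 * rng.standard_normal(128),
    "veh_C": base + 0.01 * rng.standard_normal(128),
}
print(detect_sybil(csi))  # expect the (veh_B, veh_C) pair to be flagged
```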
{"title":"Sybil Attack Detection Based on Signal Clustering in Vehicular Networks","authors":"Halit Bugra Tulay;Can Emre Koksal","doi":"10.1109/TMLCN.2024.3410208","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3410208","url":null,"abstract":"With the growing adoption of vehicular networks, ensuring the security of these networks is becoming increasingly crucial. However, the broadcast nature of communication in these networks creates numerous privacy and security concerns. In particular, the Sybil attack, where attackers can use multiple identities to disseminate false messages, cause service delays, or gain control of the network, poses a significant threat. To combat this attack, we propose a novel approach utilizing the channel state information (CSI) of vehicles. Our approach leverages the distinct spatio-temporal variations of CSI samples obtained in vehicular communication signals to detect these attacks. We conduct extensive real-world experiments using vehicle-to-everything (V2X) data, gathered from dedicated short-range communications (DSRC) in vehicular networks. Our results demonstrate a high detection rate of over 98% in the real-world experiments, showcasing the practicality and effectiveness of our method in realistic vehicular scenarios. Furthermore, we rigorously test our approach through advanced ray-tracing simulations in urban environments, which demonstrates high efficacy even in complex scenarios involving various vehicles. This makes our approach a valuable, hardware-independent solution for the V2X technologies at major intersections.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"753-765"},"PeriodicalIF":0.0,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10550012","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141319672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Decentralized Aggregation for Energy-Efficient Federated Learning in mmWave Aerial-Terrestrial Integrated Networks
Pub Date : 2024-06-05 DOI: 10.1109/TMLCN.2024.3410211
Mohammed Saif;Md. Zoheb Hassan;Md. Jahangir Hossain
It is anticipated that aerial-terrestrial integrated networks incorporating unmanned aerial vehicle (UAV)-mounted relays will offer improved coverage and connectivity in the beyond-5G era. Meanwhile, federated learning (FL) is a promising distributed machine learning technique for building inference models over wireless networks due to its ability to maintain user privacy and reduce communication overhead. However, off-the-shelf FL models aggregate global parameters at a central parameter server (CPS), increasing energy consumption and latency, and inefficiently utilizing radio resource blocks (RRBs) for distributed user devices (UDs). This paper presents a resource-efficient and decentralized FL framework called FedMoD (federated learning with model dissemination) for millimeter-wave (mmWave) aerial-terrestrial integrated networks, with the following two unique characteristics. Firstly, FedMoD incorporates a novel decentralized model dissemination scheme that uses UAVs as local model aggregators through UAV-to-UAV and device-to-device (D2D) communications. As a result, FedMoD 1) increases the number of participant UDs in developing the FL model, and 2) achieves global model aggregation without involving the CPS. Secondly, FedMoD reduces FL’s energy consumption using radio resource management (RRM) under over-the-air learning latency constraints. To achieve this, by leveraging graph theory, FedMoD optimizes the scheduling of line-of-sight (LOS) UDs to suitable UAVs and RRBs over mmWave links, and of non-LOS UDs to available LOS UDs via overlay D2D communications. Extensive simulations reveal that FedMoD, despite being decentralized, offers the same convergence performance as conventional centralized FL frameworks.
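A minimal sketch of the decentralized-aggregation idea (not the FedMoD algorithm itself): user devices average locally at their serving UAV, and the UAVs then run gossip averaging over UAV-to-UAV links so every UAV converges to the global mean without a central parameter server. The model "weights" are plain vectors, and the mixing matrix W is an assumed doubly stochastic choice.

```python
import numpy as np

rng = np.random.default_rng(1)
n_uavs, uds_per_uav, dim = 3, 4, 8

# Local model updates from user devices, grouped by their serving UAV.
ud_models = [rng.standard_normal((uds_per_uav, dim)) for _ in range(n_uavs)]

# Step 1: each UAV aggregates its own cluster (local FedAvg).
uav_models = np.stack([m.mean(axis=0) for m in ud_models])

# Step 2: gossip averaging over UAV-to-UAV links; W is doubly stochastic, so
# repeated mixing drives every UAV to the global average without a CPS.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
for _ in range(20):
    uav_models = W @ uav_models

global_avg = np.concatenate(ud_models).mean(axis=0)
print(np.allclose(uav_models[0], global_avg, atol=1e-3))  # True: consensus reached
```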
{"title":"Decentralized Aggregation for Energy-Efficient Federated Learning in mmWave Aerial-Terrestrial Integrated Networks","authors":"Mohammed Saif;Md. Zoheb Hassan;Md. Jahangir Hossain","doi":"10.1109/TMLCN.2024.3410211","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3410211","url":null,"abstract":"It is anticipated that aerial-terrestrial integrated networks incorporating unmanned aerial vehicles (UAVs) mounted relays will offer improved coverage and connectivity in the beyond 5G era. Meanwhile, federated learning (FL) is a promising distributed machine learning technique for building inference models over wireless networks due to its ability to maintain user privacy and reduce communication overhead. However, off-the-shelf FL models aggregate global parameters at a central parameter server (CPS), increasing energy consumption and latency, as well as inefficiently utilizing radio resource blocks (RRBs) for distributed user devices (UDs). This paper presents a resource-efficient and decentralized FL framework called FedMoD (federated learning with model dissemination), for millimeter-wave (mmWave) aerial-terrestrial integrated networks with the following two unique characteristics. Firstly, FedMoD incorporates a novel decentralized model dissemination scheme that uses UAVs as local model aggregators through UAV-to-UAV and device-to-device (D2D) communications. As a result, FedMoD 1) increases the number of participant UDs in developing the FL model; and 2) achieves global model aggregation without involving CPS. Secondly, FedMoD reduces FL’s energy consumption using radio resource management (RRM) under the constraints of over-the-air learning latency. To achieve this, by leveraging graph theory, FedMoD optimizes the scheduling of line-of-sight (LOS) UDs to suitable UAVs and RRBs over mmWave links and non-LOS UDs to available LOS UDs via overlay D2D communications. Extensive simulations reveal that FedMoD, despite being decentralized, offers the same convergence performance to the conventional centralized FL frameworks.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"1283-1304"},"PeriodicalIF":0.0,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10550002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142230893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DFL: Dynamic Federated Split Learning in Heterogeneous IoT
Pub Date : 2024-06-04 DOI: 10.1109/TMLCN.2024.3409205
Eric Samikwa;Antonio Di Maio;Torsten Braun
Federated Learning (FL) in edge Internet of Things (IoT) environments is challenging due to the heterogeneous nature of the learning environment, mainly embodied in two aspects. Firstly, the statistically heterogeneous data, usually non-independent and identically distributed (non-IID), from geographically distributed clients can deteriorate the FL training accuracy. Secondly, the heterogeneous computing and communication resources in IoT devices often result in unstable training processes that slow down the training of a global model and affect energy consumption. Most existing solutions address only one side of the heterogeneity issue and neglect the joint problem of resource and data heterogeneity for the resource-constrained IoT. In this article, we propose Dynamic Federated split Learning (DFL) to address the joint problem of data and resource heterogeneity for distributed training in IoT. DFL enhances training efficiency in heterogeneous dynamic IoT through resource-aware split computing of deep neural networks and dynamic clustering of training participants based on the similarity of their sub-model layers. We evaluate DFL on a real testbed comprising heterogeneous IoT devices using two widely adopted datasets, in various non-IID settings. Results show that, in scenarios with both data and resource heterogeneity, DFL reduces training time by up to 48% and energy consumption by up to 62.8%, and improves accuracy by up to 32%, compared to classic FL and Federated Split Learning.
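The sketch below illustrates one ingredient of DFL as described above, grouping split-learning clients by the similarity of their client-side layer weights. The greedy clustering rule, cosine metric, layer shapes, and threshold are assumptions rather than the paper's procedure.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def cluster_clients(client_layers, sim_threshold=0.9):
    """Greedy clustering: a client joins the first cluster whose representative
    client-side sub-model is similar enough, otherwise it starts a new cluster."""
    clusters, reps = [], []
    for idx, w in enumerate(client_layers):
        flat = w.ravel()
        for c, rep in enumerate(reps):
            if cosine(flat, rep) >= sim_threshold:
                clusters[c].append(idx)
                break
        else:
            clusters.append([idx])
            reps.append(flat)
    return clusters

rng = np.random.default_rng(2)
shared = rng.standard_normal((16, 8))      # two clients whose client-side layers nearly match
clients = [shared + 0.01 * rng.standard_normal((16, 8)),
           shared + 0.01 * rng.standard_normal((16, 8)),
           rng.standard_normal((16, 8))]   # one statistically different client
print(cluster_clients(clients))            # e.g. [[0, 1], [2]]
```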
{"title":"DFL: Dynamic Federated Split Learning in Heterogeneous IoT","authors":"Eric Samikwa;Antonio Di Maio;Torsten Braun","doi":"10.1109/TMLCN.2024.3409205","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3409205","url":null,"abstract":"Federated Learning (FL) in edge Internet of Things (IoT) environments is challenging due to the heterogeneous nature of the learning environment, mainly embodied in two aspects. Firstly, the statistically heterogeneous data, usually non-independent identically distributed (non-IID), from geographically distributed clients can deteriorate the FL training accuracy. Secondly, the heterogeneous computing and communication resources in IoT devices often result in unstable training processes that slow down the training of a global model and affect energy consumption. Most existing solutions address only the unilateral side of the heterogeneity issue but neglect the joint problem of resources and data heterogeneity for the resource-constrained IoT. In this article, we propose Dynamic Federated split Learning (DFL) to address the joint problem of data and resource heterogeneity for distributed training in IoT. DFL enhances training efficiency in heterogeneous dynamic IoT through resource-aware split computing of deep neural networks and dynamic clustering of training participants based on the similarity of their sub-model layers. We evaluate DFL on a real testbed comprising heterogeneous IoT devices using two widely-adopted datasets, in various non-IID settings. Results show that DFL improves training performance in terms of training time by up to 48%, accuracy by up to 32%, and energy consumption by up to 62.8% compared to classic FL and Federated Split Learning in scenarios with both data and resource heterogeneity.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"733-752"},"PeriodicalIF":0.0,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10547401","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141319638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reinforcement Learning for Robust Header Compression (ROHC) Under Model Uncertainty
Pub Date : 2024-06-04 DOI: 10.1109/TMLCN.2024.3409200
Shusen Jing;Songyang Zhang;Zhi Ding
Robust header compression (ROHC), critically positioned between the network and MAC layers, plays an important role in modern wireless communication networks by improving data efficiency. This work investigates bi-directional ROHC (BD-ROHC) integrated with a novel reinforcement learning (RL) architecture. We formulate a partially observable Markov decision process (POMDP), in which the compressor is the POMDP agent and the environment consists of the decompressor, the channel, and the header source. Our work adopts the well-known deep Q-network (DQN), which takes the history of actions and observations as inputs and outputs the Q-values of the corresponding actions. Compared with the ideal dynamic programming (DP) proposed in existing works, the newly proposed method scales well with the state, action, and observation spaces. In contrast, DP often incurs formidable computation costs when the number of states becomes large due to long decompressor feedback delays and complex channel models. In addition, the new method does not require prior knowledge of the transition dynamics or the exact observation dependencies of the model, which are often unavailable in practical applications.
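The following sketch shows the history-as-state idea behind the DQN agent: the last few (observation, action) pairs are stacked into one input vector, and an epsilon-greedy policy acts on the resulting Q-values. The toy dimensions and the random-weight MLP standing in for the trained deep Q-network are assumptions; the Bellman target in the comment is the generic DQN target, not the paper's exact training loop.

```python
import numpy as np

N_ACTIONS, OBS_DIM, HISTORY = 3, 4, 5        # e.g. three compression-level actions
STATE_DIM = HISTORY * (OBS_DIM + N_ACTIONS)  # stacked (observation, one-hot action) pairs

rng = np.random.default_rng(3)
W1, b1 = 0.1 * rng.standard_normal((64, STATE_DIM)), np.zeros(64)
W2, b2 = 0.1 * rng.standard_normal((N_ACTIONS, 64)), np.zeros(N_ACTIONS)

def q_values(state):
    h = np.maximum(W1 @ state + b1, 0.0)     # ReLU hidden layer
    return W2 @ h + b2                       # one Q-value per action

def build_state(history):
    """Concatenate the last HISTORY (observation, action) pairs into one vector."""
    parts = []
    for obs, action in history[-HISTORY:]:
        one_hot = np.zeros(N_ACTIONS)
        one_hot[action] = 1.0
        parts.append(np.concatenate([obs, one_hot]))
    return np.concatenate(parts)

def act(state, epsilon=0.1):
    if rng.random() < epsilon:               # explore
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(state)))   # exploit

# Training would regress q_values(s)[a] toward r + gamma * max_a' Q_target(s', a').
history = [(rng.standard_normal(OBS_DIM), int(rng.integers(N_ACTIONS)))
           for _ in range(HISTORY)]
print(act(build_state(history)))
```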
{"title":"Reinforcement Learning for Robust Header Compression (ROHC) Under Model Uncertainty","authors":"Shusen Jing;Songyang Zhang;Zhi Ding","doi":"10.1109/TMLCN.2024.3409200","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3409200","url":null,"abstract":"Robust header compression (ROHC), critically positioned between network and MAC layers, plays an important role in modern wireless communication networks for improving data efficiency. This work investigates bi-directional ROHC (BD-ROHC) integrated with a novel architecture of reinforcement learning (RL). We formulate a partially observable Markov decision process (POMDP), where the compressor is the POMDP agent, and the environment consists of the decompressor, channel, and header source. Our work adopts the well-known deep Q-network (DQN), which takes the history of actions and observations as inputs, and outputs the Q-values of corresponding actions. Compared with the ideal dynamic programming (DP) proposed in existing works, the newly proposed method is scalable to the state, action, and observation spaces. In contrast, DP often incurs formidable computation costs when the number of states becomes large due to long decompressor feedback delays and complex channel models. In addition, the new method does not require prior knowledge of the transition dynamics and accurate observation dependency of the model, which are often unavailable in practical applications.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"1033-1044"},"PeriodicalIF":0.0,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10547320","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141725562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Graph Neural Networks Approach for Joint Wireless Power Control and Spectrum Allocation
Pub Date : 2024-06-03 DOI: 10.1109/TMLCN.2024.3408723
Maher Marwani;Georges Kaddoum
The proliferation of wireless technologies and the escalating performance requirements of wireless applications have led to diverse and dynamic wireless environments, presenting formidable challenges to existing radio resource management (RRM) frameworks. Researchers have proposed utilizing deep learning (DL) models to address these challenges by learning patterns from wireless data and leveraging the extracted information to solve multiple RRM tasks, such as channel allocation and power control. However, it is noteworthy that the majority of existing DL architectures are designed to operate on Euclidean data, thereby disregarding a substantial amount of information about the topological structure of wireless networks. As a result, the performance of DL models may be suboptimal when applied to wireless environments due to the failure to capture the network’s non-Euclidean geometry. This study presents a novel approach to address the challenge of power control and spectrum allocation in an N-link interference environment with shared channels, utilizing a graph neural network (GNN) based framework. In this type of wireless environment, the available bandwidth can be divided into blocks, offering greater flexibility in allocating bandwidth to communication links, but also requiring effective management of interference. One potential solution to mitigate the impact of interference is to control the transmission power of each link while ensuring the network’s data rate performance. Therefore, the power control and spectrum allocation problems are inherently coupled and should be solved jointly. The proposed GNN-based framework presents a promising avenue for tackling this complex challenge. Our experimental results demonstrate that our proposed approach yields significant improvements compared to other existing methods in terms of convergence, generalization, performance, and robustness, particularly in the context of an imperfect channel.
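A hedged sketch of one GNN message-passing step over an interference graph of N links, producing per-link transmit-power and resource-block scores. The feature dimensions, weight matrices, normalization, and readout heads are illustrative assumptions, not the architecture proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n_links, feat_dim, hidden, n_rbs = 5, 3, 16, 2

A = rng.random((n_links, n_links))           # pairwise interference strengths
np.fill_diagonal(A, 0.0)
X = rng.standard_normal((n_links, feat_dim)) # per-link features (e.g. channel statistics)

W_self = 0.1 * rng.standard_normal((feat_dim, hidden))
W_neigh = 0.1 * rng.standard_normal((feat_dim, hidden))
W_power = 0.1 * rng.standard_normal((hidden, 1))
W_rb = 0.1 * rng.standard_normal((hidden, n_rbs))

def message_pass(X, A):
    """One round: each link combines its own features with interference-weighted
    neighbour features, followed by a ReLU."""
    deg = A.sum(axis=1, keepdims=True) + 1e-9
    neigh = (A @ X) / deg                    # normalized neighbourhood aggregation
    return np.maximum(X @ W_self + neigh @ W_neigh, 0.0)

H = message_pass(X, A)
power = 1.0 / (1.0 + np.exp(-(H @ W_power))) # sigmoid -> fraction of maximum power
rb_choice = np.argmax(H @ W_rb, axis=1)      # hard resource-block assignment per link
print(power.ravel(), rb_choice)
```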
{"title":"Graph Neural Networks Approach for Joint Wireless Power Control and Spectrum Allocation","authors":"Maher Marwani;Georges Kaddoum","doi":"10.1109/TMLCN.2024.3408723","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3408723","url":null,"abstract":"The proliferation of wireless technologies and the escalating performance requirements of wireless applications have led to diverse and dynamic wireless environments, presenting formidable challenges to existing radio resource management (RRM) frameworks. Researchers have proposed utilizing deep learning (DL) models to address these challenges to learn patterns from wireless data and leverage the extracted information to resolve multiple RRM tasks, such as channel allocation and power control. However, it is noteworthy that the majority of existing DL architectures are designed to operate on Euclidean data, thereby disregarding a substantial amount of information about the topological structure of wireless networks. As a result, the performance of DL models may be suboptimal when applied to wireless environments due to the failure to capture the network’s non-Euclidean geometry. This study presents a novel approach to address the challenge of power control and spectrum allocation in an N-link interference environment with shared channels, utilizing a graph neural network (GNN) based framework. In this type of wireless environment, the available bandwidth can be divided into blocks, offering greater flexibility in allocating bandwidth to communication links, but also requiring effective management of interference. One potential solution to mitigate the impact of interference is to control the transmission power of each link while ensuring the network’s data rate performance. Therefore, the power control and spectrum allocation problems are inherently coupled and should be solved jointly. The proposed GNN-based framework presents a promising avenue for tackling this complex challenge. Our experimental results demonstrate that our proposed approach yields significant improvements compared to other existing methods in terms of convergence, generalization, performance, and robustness, particularly in the context of an imperfect channel.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"717-732"},"PeriodicalIF":0.0,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10545547","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141298352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Data-Driven Energy Efficiency Modeling in Large-Scale Networks: An Expert Knowledge and ML-Based Approach
Pub Date : 2024-06-03 DOI: 10.1109/TMLCN.2024.3407691
David López-Pérez;Antonio De Domenico;Nicola Piovesan;Mérouane Debbah
The energy consumption of mobile networks poses a critical challenge. Mitigating this concern necessitates the deployment and optimization of network energy-saving solutions, such as carrier shutdown, to dynamically manage network resources. Traditional optimization approaches encounter complexity due to factors like the large number of cells, stochastic traffic, channel variations, and intricate trade-offs. This paper introduces the simulated reality of communication networks (SRCON) framework, a novel, data-driven modeling paradigm that harnesses live network data and employs a blend of machine learning (ML)- and expert-based models. This mix of models accurately characterizes the functioning of network components and predicts network energy efficiency and user equipment (UE) quality of service for any energy carrier shutdown configuration in a specific network. Distinguishing itself from existing methods, SRCON eliminates the reliance on expensive expert knowledge, drive testing, or incomplete maps for predicting network performance. This paper details the pipeline employed by SRCON to decompose the large network energy efficiency modeling problem into ML- and expert-based submodels. It demonstrates how, by embracing stochasticity and carefully crafting the relationship between such submodels, the overall computational complexity can be reduced and prediction accuracy enhanced. Results derived from real network data underscore the paradigm shift introduced by SRCON, showcasing significant gains over a state-of-the-art method used by an operator for network energy efficiency modeling. The reliability of this local, data-driven modeling of the network proves to be a key asset for network energy-saving optimization.
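To illustrate the submodel-composition idea, the sketch below chains an assumed expert-based power model (a fixed plus load-proportional term per active carrier) with a stand-in "learned" regressor that maps offered traffic to carrier load under a given shutdown configuration. All coefficients and function names are invented for illustration and do not come from SRCON.

```python
import numpy as np

P_FIXED_W, P_LOAD_W = 80.0, 120.0       # assumed per-carrier static / full-load power (W)

def learned_load(traffic_mbps, active):
    """Stand-in for the ML submodel: traffic spilled onto the remaining active
    carriers raises their load, clipped at 100%."""
    share = traffic_mbps.sum() / max(active.sum(), 1)
    return np.clip(active * share / 150.0, 0.0, 1.0)

def network_energy(traffic_mbps, shutdown_mask):
    """Expert submodel: per-carrier power = fixed part + load-proportional part."""
    active = 1 - shutdown_mask
    load = learned_load(traffic_mbps, active)
    return float(np.sum(active * (P_FIXED_W + P_LOAD_W * load)))

traffic = np.array([40.0, 25.0, 10.0, 5.0])   # per-carrier offered traffic
all_on = np.zeros(4, dtype=int)
two_off = np.array([0, 0, 1, 1])              # shut down the two lightly loaded carriers
print(network_energy(traffic, all_on), network_energy(traffic, two_off))  # 384.0 vs 224.0
```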
{"title":"Data-Driven Energy Efficiency Modeling in Large-Scale Networks: An Expert Knowledge and ML-Based Approach","authors":"David López-Pérez;Antonio De Domenico;Nicola Piovesan;Mérouane Debbah","doi":"10.1109/TMLCN.2024.3407691","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3407691","url":null,"abstract":"The energy consumption of mobile networks poses a critical challenge. Mitigating this concern necessitates the deployment and optimization of network energy-saving solutions, such as carrier shutdown, to dynamically manage network resources. Traditional optimization approaches encounter complexity due to factors like the large number of cells, stochastic traffic, channel variations, and intricate trade-offs. This paper introduces the simulated reality of communication networks (SRCON) framework, a novel, data-driven modeling paradigm that harnesses live network data and employs a blend of machine learning (ML)- and expert-based models. These mix of models accurately characterizes the functioning of network components, and predicts network energy efficiency and user equipment (UE) quality of service for any energy carrier shutdown configuration in a specific network. Distinguishing itself from existing methods, SRCON eliminates the reliance on expensive expert knowledge, drive testing, or incomplete maps for predicting network performance. This paper details the pipeline employed by SRCON to decompose the large network energy efficiency modeling problem into ML- and expert-based submodels. It demonstrates how, by embracing stochasticity, and carefully crafting the relationship between such submodels, the overall computational complexity can be reduced and prediction accuracy enhanced. Results derived from real network data underscore the paradigm shift introduced by SRCON, showcasing significant gains over a state-of-the-art method used by a operator for network energy efficiency modeling. The reliability of this local, data-driven modeling of the network proves to be a key asset for network energy-saving optimization.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"780-804"},"PeriodicalIF":0.0,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10547043","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141429918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Ensemble Learning With Pruning for DDoS Attack Detection in IoT Networks
Pub Date : 2024-04-30 DOI: 10.1109/TMLCN.2024.3395419
Makhduma F. Saiyed;Irfan Al-Anbagi
The upsurge of Internet of Things (IoT) devices has increased their vulnerability to Distributed Denial of Service (DDoS) attacks. DDoS attacks have evolved into complex multi-vector threats that combine high-volume and low-volume attack strategies, posing challenges for detection using traditional methods. These challenges highlight the importance of reliable detection and prevention measures. This paper introduces a novel Deep Ensemble learning with Pruning (DEEPShield) system to efficiently detect both high- and low-volume DDoS attacks in resource-constrained environments. The DEEPShield system uses ensemble learning by integrating a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network with a network traffic analysis system. This system analyzes and preprocesses network traffic while being data-agnostic, resulting in high detection accuracy. In addition, the DEEPShield system applies unit pruning to refine ensemble models, optimizing them for deployment on edge devices while maintaining a balance between accuracy and computational efficiency. To address the lack of a detailed dataset for high- and low-volume DDoS attacks, this paper also introduces a dataset named HL-IoT, which includes both attack types. Furthermore, the testbed evaluation of the DEEPShield system under various load scenarios and network traffic loads showcases its effectiveness and robustness. Compared to state-of-the-art deep ensembles and deep learning methods across various datasets, including HL-IoT, ToN-IoT, CICIDS-17, and ISCX-12, the DEEPShield system consistently achieves an accuracy over 90% for both DDoS attack types. Furthermore, the DEEPShield system achieves this performance with reduced memory and processing requirements, underscoring its adaptability for edge computing scenarios.
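A small sketch of magnitude-based unit pruning, one ingredient of the pipeline described above: units (rows) of a dense layer with the smallest L2 norm are zeroed to shrink the model for edge deployment. The layer shape and pruning ratio are assumptions, and this is not the authors' implementation.

```python
import numpy as np

def prune_units(weight, ratio=0.3):
    """Zero out the output units (rows) of a dense layer with the smallest L2 norm,
    shrinking the effective model for edge deployment."""
    norms = np.linalg.norm(weight, axis=1)
    n_prune = int(ratio * weight.shape[0])
    pruned_rows = np.argsort(norms)[:n_prune]
    out = weight.copy()
    out[pruned_rows, :] = 0.0
    return out

rng = np.random.default_rng(5)
dense = rng.standard_normal((32, 64))          # a layer with 32 units and 64 inputs
pruned = prune_units(dense, ratio=0.25)
kept = np.count_nonzero(np.linalg.norm(pruned, axis=1))
print(f"{kept}/32 units kept")                 # 24/32 units kept
```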
{"title":"Deep Ensemble Learning With Pruning for DDoS Attack Detection in IoT Networks","authors":"Makhduma F. Saiyedand;Irfan Al-Anbagi","doi":"10.1109/TMLCN.2024.3395419","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3395419","url":null,"abstract":"The upsurge of Internet of Things (IoT) devices has increased their vulnerability to Distributed Denial of Service (DDoS) attacks. DDoS attacks have evolved into complex multi-vector threats that high-volume and low-volume attack strategies, posing challenges for detection using traditional methods. These challenges highlight the importance of reliable detection and prevention measures. This paper introduces a novel Deep Ensemble learning with Pruning (DEEPShield) system, to efficiently detect both high- and low-volume DDoS attacks in resource-constrained environments. The DEEPShield system uses ensemble learning by integrating a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network with a network traffic analysis system. This system analyzes and preprocesses network traffic while being data-agnostic, resulting in high detection accuracy. In addition, the DEEPShield system applies unit pruning to refine ensemble models, optimizing them for deployment on edge devices while maintaining a balance between accuracy and computational efficiency. To address the lack of a detailed dataset for high- and low-volume DDoS attacks, this paper also introduces a dataset named HL-IoT, which includes both attack types. Furthermore, the testbed evaluation of the DEEPShield system under various load scenarios and network traffic loads showcases its effectiveness and robustness. Compared to the state-of-the-art deep ensembles and deep learning methods across various datasets, including HL-IoT, ToN-IoT, CICIDS-17, and ISCX-12, the DEEPShield system consistently achieves an accuracy over 90% for both DDoS attack types. Furthermore, the DEEPShield system achieves this performance with reduced memory and processing requirements, underscoring its adaptability for edge computing scenarios.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"596-616"},"PeriodicalIF":0.0,"publicationDate":"2024-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10513369","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140895040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Federated Analytics With Data Augmentation in Domain Generalization Toward Future Networks
Pub Date : 2024-04-25 DOI: 10.1109/TMLCN.2024.3393892
Xunzheng Zhang;Juan Marcelo Parra-Ullauri;Shadi Moazzeni;Xenofon Vasilakos;Reza Nejabati;Dimitra Simeonidou
Federated Domain Generalization (FDG) aims to train a global model that generalizes well to new clients in a privacy-conscious manner, even when domain shifts are encountered. Increasing concerns about knowledge generalization and data privacy also challenge the traditional gather-and-analyze paradigm in networks. Recent investigations mainly focus on aggregation optimization and domain-invariant representations. However, without directly considering data augmentation and leveraging the knowledge among existing domains, domain-only data cannot guarantee the generalization ability of the FDG model when testing on an unseen domain. To overcome this problem, this paper proposes a distributed data augmentation method that combines Generative Adversarial Networks (GANs) and Federated Analytics (FA) to enhance the generalization ability of the trained FDG model, called FA-FDG. First, FA-FDG integrates GAN data generators from each Federated Learning (FL) client. Second, an evaluation index called generalization ability of domain (GAD) is proposed in the FA server. Then, targeted data augmentation is implemented in each FL client with the GAD index and the integrated data generators. Extensive experiments on several data sets have shown the effectiveness of FA-FDG. Specifically, the accuracy of the FDG model improves by up to 5.12% in classification problems, and the R-squared index of the FDG model advances by up to 0.22 in the regression problem.
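The sketch below illustrates how a GAD-style score could drive targeted augmentation: the FA server turns per-domain scores into a synthetic-sample budget so weaker domains generate more GAN samples. The concrete GAD definition (here, per-domain accuracy of the global model) and the budget rule are assumptions rather than the paper's formulas.

```python
import numpy as np

def augmentation_budget(gad_scores, total_samples):
    """Allocate more synthetic (GAN-generated) samples to domains whose
    generalization-ability score is lower."""
    need = 1.0 - gad_scores                 # low score -> high need for augmentation
    weights = need / need.sum()
    return np.round(weights * total_samples).astype(int)

# Example GAD scores: accuracy of the global model on each client's domain,
# reported to the FA server.
gad = np.array([0.92, 0.71, 0.55, 0.88])
print(augmentation_budget(gad, total_samples=10_000))  # most samples go to domain 2
```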
{"title":"Federated Analytics With Data Augmentation in Domain Generalization Toward Future Networks","authors":"Xunzheng Zhang;Juan Marcelo Parra-Ullauri;Shadi Moazzeni;Xenofon Vasilakos;Reza Nejabati;Dimitra Simeonidou","doi":"10.1109/TMLCN.2024.3393892","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3393892","url":null,"abstract":"Federated Domain Generalization (FDG) aims to train a global model that generalizes well to new clients in a privacy-conscious manner, even when domain shifts are encountered. The increasing concerns of knowledge generalization and data privacy also challenge the traditional gather-and-analyze paradigm in networks. Recent investigations mainly focus on aggregation optimization and domain-invariant representations. However, without directly considering the data augmentation and leveraging the knowledge among existing domains, the domain-only data cannot guarantee the generalization ability of the FDG model when testing on the unseen domain. To overcome the problem, this paper proposes a distributed data augmentation method which combines Generative Adversarial Networks (GANs) and Federated Analytics (FA) to enhance the generalization ability of the trained FDG model, called FA-FDG. First, FA-FDG integrates GAN data generators from each Federated Learning (FL) client. Second, an evaluation index called generalization ability of domain (GAD) is proposed in the FA server. Then, the targeted data augmentation is implemented in each FL client with the GAD index and the integrated data generators. Extensive experiments on several data sets have shown the effectiveness of FA-FDG. Specifically, the accuracy of the FDG model improves up to 5.12% in classification problems, and the R-squared index of the FDG model advances up to 0.22 in the regression problem.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"560-579"},"PeriodicalIF":0.0,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10508396","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140818800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-Agent Double Deep Q-Learning for Fairness in Multiple-Access Underlay Cognitive Radio Networks
Pub Date : 2024-04-18 DOI: 10.1109/TMLCN.2024.3391216
Zain Ali;Zouheir Rezki;Hamid Sadjadpour
Underlay Cognitive Radio (CR) systems were introduced to resolve the issue of spectrum scarcity in wireless communication. In CR systems, an unlicensed Secondary Transmitter (ST) shares the channel with a licensed Primary Transmitter (PT). The spectral efficiency of CR systems can be further increased if multiple STs share the same channel. In underlay CR systems, the STs are required to keep interference at a low level to avoid outage at the primary system. The restriction on interference in underlay CR prevents some STs from transmitting while other STs may achieve high data rates, thus making the underlay CR network unfair. In this work, we consider the problem of achieving fairness in the rates of the STs. The considered optimization problem is non-convex in nature. Conventional iteration-based optimizers are time-consuming and may not converge when the considered problem is non-convex. To deal with this, we propose a deep-Q reinforcement learning (DQ-RL) framework that employs two separate deep neural networks for the computation and estimation of the Q-values, which provides a fast solution and is robust to channel dynamics. The proposed technique achieves near-optimal fairness while keeping the primary outage probability below 4%. Further, increasing the number of STs results in a linear increase in the computational complexity of the proposed framework. A comparison of several variants of the proposed scheme with the optimal solution is also presented. Finally, we present a novel cumulative reward framework and discuss how the combined-reward approach improves the performance of the communication system.
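Two ingredients named in the abstract are sketched below: the double-Q target, in which the online network selects the next action and the target network evaluates it, and a fairness-oriented reward over secondary-transmitter rates. Jain's fairness index is used here as a stand-in reward and is an assumption about the exact objective; the action space and rate values are illustrative.

```python
import numpy as np

def double_q_target(reward, gamma, q_online_next, q_target_next):
    """Double DQN target: the online network selects the next action,
    the target network evaluates it."""
    a_star = int(np.argmax(q_online_next))
    return reward + gamma * float(q_target_next[a_star])

def jain_fairness(rates):
    """Jain's fairness index: 1.0 means all STs achieve equal rates."""
    return float(rates.sum() ** 2 / (len(rates) * np.sum(rates ** 2)))

rng = np.random.default_rng(6)
n_actions = 4                                   # e.g. discrete transmit-power levels
q_online = rng.random(n_actions)                # Q(s', .) from the online network
q_target = rng.random(n_actions)                # Q(s', .) from the target network

st_rates = np.array([2.1, 0.4, 1.8, 0.6])       # secondary-transmitter rates (bps/Hz)
reward = jain_fairness(st_rates)
print(reward, double_q_target(reward, 0.95, q_online, q_target))
```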
{"title":"Multi-Agent Double Deep Q-Learning for Fairness in Multiple-Access Underlay Cognitive Radio Networks","authors":"Zain Ali;Zouheir Rezki;Hamid Sadjadpour","doi":"10.1109/TMLCN.2024.3391216","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3391216","url":null,"abstract":"Underlay Cognitive Radio (CR) systems were introduced to resolve the issue of spectrum scarcity in wireless communication. In CR systems, an unlicensed Secondary Transmitter (ST) shares the channel with a licensed Primary Transmitter (PT). Spectral efficiency of the CR systems can be further increased if multiple STs share the same channel. In underlay CR systems, the STs are required to keep interference at a low level to avoid outage at the primary system. The restriction on interference in underlay CR prevents some STs from transmitting while other STs may achieve high data rates, thus making the underlay CR network unfair. In this work, we consider the problem of achieving fairness in the rates of the STs. The considered optimization problem is non-convex in nature. The conventional iteration-based optimizers are time-consuming and may not converge when the considered problem is non-convex. To deal with the problem, we propose a deep-Q reinforcement learning (DQ-RL) framework that employs two separate deep neural networks for the computation and estimation of the Q-values which provides a fast solution and is robust to channel dynamic. The proposed technique achieves near optimal values of fairness while offering primary outage probability of less than 4%. Further, increasing the number of STs results in a linear increase in the computational complexity of the proposed framework. A comparison of several variants of the proposed scheme with the optimal solution is also presented. Finally, we present a novel cumulative reward framework and discuss how the combined-reward approach improves the performance of the communication system.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"580-595"},"PeriodicalIF":0.0,"publicationDate":"2024-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10504881","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140820363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Online Learning to Cache and Recommend in the Next Generation Cellular Networks
Pub Date : 2024-04-17 DOI: 10.1109/TMLCN.2024.3388975
Krishnendu S Tharakan;B. N. Bharath;Vimal Bhatia
Efficient caching can be achieved by accurately predicting the popularity of files. It is well known that the popularity of a file can be nudged by recommendation, and hence it can be estimated accurately, leading to an efficient caching strategy. Motivated by this, in this paper, we consider the problem of joint caching and recommendation in a 5G and beyond heterogeneous network. We model the influence of recommendation on demands by a Probability Transition Matrix (PTM). The proposed framework consists of estimating the PTM and using it to jointly recommend and cache the files. In particular, this paper considers two estimation methods, namely a) Bayesian estimation and b) a genie-aided point estimation. An approximate high-probability bound on the regret of both estimation methods is provided. Using this result, we show that the approximate regret achieved by the genie-aided point estimation approach is $\mathcal{O}(T^{2/3}\sqrt{\log T})$, while the Bayesian estimation method achieves a much better scaling of $\mathcal{O}(\sqrt{T})$. These results are extended to a heterogeneous network consisting of M small base stations (SBSs) with a central macro base station. The estimates are available at multiple SBSs and are combined using appropriate weights. Insights on the choice of these weights are provided by using the derived approximate regret bound in the multiple-SBS case. Finally, simulation results confirm the superiority of the proposed algorithms in terms of average cache hit rate, delay, and throughput.
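The sketch below contrasts the two PTM estimators mentioned above, a Dirichlet (Bayesian) posterior-mean estimate and a plain empirical point estimate, and then caches the files predicted to be most popular under the current recommendation. The file counts, the uniform Dirichlet prior, and the top-C cache rule are assumptions made for illustration.

```python
import numpy as np

n_files, cache_size = 5, 2
rng = np.random.default_rng(7)

# counts[i, j]: how often file j was requested while file i was being recommended.
counts = rng.integers(1, 20, size=(n_files, n_files)).astype(float)

ptm_point = counts / counts.sum(axis=1, keepdims=True)                   # point estimate
ptm_bayes = (counts + 1.0) / (counts + 1.0).sum(axis=1, keepdims=True)   # Dirichlet(1) posterior mean

recommended = 3                                # file currently being recommended
predicted_popularity = ptm_bayes[recommended]  # demand distribution nudged by the recommendation
cache = np.argsort(predicted_popularity)[::-1][:cache_size]
print(cache)                                   # indices of the files to cache
```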
{"title":"Online Learning to Cache and Recommend in the Next Generation Cellular Networks","authors":"Krishnendu S Tharakan;B. N. Bharath;Vimal Bhatia","doi":"10.1109/TMLCN.2024.3388975","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3388975","url":null,"abstract":"An efficient caching can be achieved by predicting the popularity of the files accurately. It is well known that the popularity of a file can be nudged by using recommendation, and hence it can be estimated accurately leading to an efficient caching strategy. Motivated by this, in this paper, we consider the problem of joint caching and recommendation in a 5G and beyond heterogeneous network. We model the influence of recommendation on demands by a Probability Transition Matrix (PTM). The proposed framework consists of estimating the PTM and use them to jointly recommend and cache the files. In particular, this paper considers two estimation methods namely a) \u0000<monospace>Bayesian estimation</monospace>\u0000 and b) a genie aided \u0000<monospace>Point estimation</monospace>\u0000. An approximate high probability bound on the regret of both the estimation methods are provided. Using this result, we show that the approximate regret achieved by the genie aided \u0000<monospace>Point estimation</monospace>\u0000 approach is \u0000<inline-formula> <tex-math>$mathcal {O}(T^{2/3} sqrt {log T})$ </tex-math></inline-formula>\u0000 while the \u0000<monospace>Bayesian estimation</monospace>\u0000 method achieves a much better scaling of \u0000<inline-formula> <tex-math>$mathcal {O}(sqrt {T})$ </tex-math></inline-formula>\u0000. These results are extended to a heterogeneous network consisting of M small base stations (SBSs) with a central macro base station. The estimates are available at multiple SBSs, and are combined using appropriate weights. Insights on the choice of these weights are provided by using the derived approximate regret bound in the multiple SBS case. Finally, simulation results confirm the superiority of the proposed algorithms in terms of average cache hit rate, delay and throughput.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"511-525"},"PeriodicalIF":0.0,"publicationDate":"2024-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10504600","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140639381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0