
Latest Publications: IEEE Transactions on Machine Learning in Communications and Networking

Optimal Access Point Centric Clustering for Cell-Free Massive MIMO Using Gaussian Mixture Model Clustering
Pub Date: 2024-03-21 | DOI: 10.1109/TMLCN.2024.3403789
Pialy Biswas;Ranjan K. Mallik;Khaled B. Letaief
This paper proposes a Gaussian mixture model (GMM) based access point (AP) clustering technique for cell-free massive MIMO (CFMM) communication systems. The APs are first clustered on the basis of large-scale fading coefficients, and the users are assigned to each cluster depending on the channel gain. As the number of clusters increases, the overall data rate of the system degrades, creating a trade-off between the number of clusters and the average rate per user. To address this problem, we formulate an optimization problem that jointly optimizes the upper bound on the average downlink rate per user and the number of clusters. The optimal number of clusters is determined by solving this optimization problem, after which the APs and users are grouped. As a result, the computational expense is much lower than that of existing techniques, which require evaluating the network performance over multiple iterations to find the optimal number of clusters. In addition, we analyze the performance of both balanced and unbalanced clustering. Numerical results indicate that unbalanced clustering yields a superior rate per user while maintaining lower complexity than balanced clustering. Furthermore, we investigate the statistical behavior of the spectral efficiency (SE) per user in the clustered CFMM. The findings reveal that the SE per user can be approximated by a logistic distribution.
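As a rough illustration of the pipeline the abstract describes, the sketch below clusters APs with a GMM on log-domain large-scale fading coefficients and then assigns each user to the cluster with the largest aggregate channel gain. The dimensions, the log-normal fading model, and the use of scikit-learn's GaussianMixture are assumptions for illustration, not the paper's exact formulation, which additionally optimizes the number of clusters.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_aps, n_users, n_clusters = 64, 16, 4

# beta[a, u]: large-scale fading coefficient between AP a and user u
# (toy log-normal model; the paper derives these from a propagation model)
beta = 10.0 ** rng.normal(-8.0, 1.0, size=(n_aps, n_users))

# Cluster APs with a GMM, using each AP's log-domain fading profile
# across all users as its feature vector
gmm = GaussianMixture(n_components=n_clusters, random_state=0)
ap_cluster = gmm.fit_predict(np.log10(beta))

# Assign each user to the cluster whose APs offer the largest aggregate gain
user_cluster = np.array([
    int(np.argmax([beta[ap_cluster == c, u].sum() for c in range(n_clusters)]))
    for u in range(n_users)
])

print("APs per cluster:", np.bincount(ap_cluster, minlength=n_clusters))
print("user -> cluster:", user_cluster)
```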
Citations: 0
Deep Learning for Radio Resource Allocation Under DoS Attack
Pub Date: 2024-03-20 | DOI: 10.1109/TMLCN.2024.3403513
Ke Wang;Wanchun Liu;Teng Joon Lim
In this paper, we focus on the problem of remote state estimation in wireless networked cyber-physical systems (CPS). Information from multiple sensors is transmitted to a central gateway over a wireless network with fewer channels than sensors. Channel and power allocation are performed jointly, in the presence of a denial-of-service (DoS) attack in which one or more channels are jammed by an attacker transmitting spurious signals. The attack policy is unknown, and the central gateway aims to minimize the state estimation error with maximum energy efficiency. The problem involves a combination of discrete and continuous action spaces. In addition, the state and action spaces are high-dimensional, and the channel states are not fully known to the defender. We propose an innovative model-free deep reinforcement learning (DRL) algorithm to address the problem. We also develop a deep learning-based method with a novel deep neural network (DNN) structure for detecting changes in the attack policy after training. The proposed online policy change detector accelerates the defender's adaptation to a new attack policy and saves computational resources compared to continuous training. In short, we develop a complete system featuring a DRL-based defender that is trained initially and continually adapts to changes in the attack policy. Our numerical results show that the proposed intelligent system can significantly enhance the system's resilience to DoS attacks.
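The paper's policy change detector is a dedicated DNN; the sketch below conveys only the underlying idea with a simple statistical test: track a positive reward stream and flag a change when a recent window's mean drops well below a calibrated baseline, triggering adaptation. The window size and drop threshold are assumed values, not the paper's design.

```python
from collections import deque

class PolicyChangeDetector:
    """Flag a likely attack-policy change when recent rewards degrade."""

    def __init__(self, window=100, drop_factor=0.7):
        self.window = deque(maxlen=window)   # sliding window of rewards
        self.baseline = None                 # mean reward under the known policy
        self.drop_factor = drop_factor       # tolerated fraction of the baseline

    def update(self, reward):
        self.window.append(reward)
        if len(self.window) < self.window.maxlen:
            return False                     # not enough samples yet
        mean = sum(self.window) / len(self.window)
        if self.baseline is None:
            self.baseline = mean             # calibrate on the first full window
            return False
        return mean < self.drop_factor * self.baseline  # True => adapt/retrain

detector = PolicyChangeDetector()
changed = [detector.update(r) for r in [1.0] * 200 + [0.2] * 200]
print("change first flagged at step:", changed.index(True))
```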
Citations: 0
Hierarchical ML Codebook Design for Extreme MIMO Beam Management
Pub Date: 2024-03-16 | DOI: 10.1109/TMLCN.2024.3402178
Ryan M. Dreifuerst;Robert W. Heath
Beam management is a strategy to unify beamforming and channel state information (CSI) acquisition with large antenna arrays in 5G. Codebooks serve multiple purposes in beam management, including beamforming reference signals, CSI reporting, and analog beam training. In this paper, we propose and evaluate a machine learning-refined codebook design process for extremely large multiple-input multiple-output (X-MIMO) systems. We propose a neural network and beam selection strategy to design the initial access and refinement codebooks using end-to-end learning from beamspace representations. The algorithm, called Extreme-Beam Management (X-BM), can significantly improve the performance of the extremely large arrays envisioned for 6G while capturing realistic wireless and physical-layer aspects. Our results show an 8 dB improvement in initial access and overall effective spectral efficiency improvements compared to traditional codebook methods.
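A toy two-tier codebook, shown below, illustrates the initial access/refinement structure that X-BM learns end-to-end: here, classic DFT beams stand in for the learned beams. A coarse tier scans wide sectors, and a fine tier is searched only around the winning coarse beam. The array size, beam counts, and Rayleigh channel are assumptions.

```python
import numpy as np

def dft_codebook(n_ant, n_beams):
    # Columns are unit-norm array response vectors on a sin(theta) grid in [-1, 1)
    grid = np.linspace(-1.0, 1.0, n_beams, endpoint=False)
    steering = np.exp(1j * np.pi * np.arange(n_ant)[:, None] * grid)
    return steering / np.sqrt(n_ant), grid

n_ant, n_coarse, n_fine = 64, 8, 64
coarse, cgrid = dft_codebook(n_ant, n_coarse)
fine, fgrid = dft_codebook(n_ant, n_fine)

rng = np.random.default_rng(1)
h = (rng.standard_normal(n_ant) + 1j * rng.standard_normal(n_ant)) / np.sqrt(2)

# Tier 1 (initial access): sweep the coarse codebook, keep the best wide beam
b = int(np.argmax(np.abs(coarse.conj().T @ h)))

# Tier 2 (refinement): search only the fine beams inside the winning sector
mask = np.abs(fgrid - cgrid[b]) <= 1.0 / n_coarse
gains = np.abs(fine[:, mask].conj().T @ h)
print("coarse beam:", b, "| refined gain:", float(gains.max()))
```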
Citations: 0
DIN: A Decentralized Inexact Newton Algorithm for Consensus Optimization
Pub Date: 2024-03-16 | DOI: 10.1109/TMLCN.2024.3400756
Abdulmomen Ghalkha;Chaouki Ben Issaid;Anis Elgabli;Mehdi Bennis
This paper tackles a challenging decentralized consensus optimization problem defined over a network of interconnected devices. The devices work collaboratively to solve a problem using only their local data, exchanging information only with their immediate neighbors. One approach to solving such a problem is to use Newton-type methods, which are known for their fast convergence. However, these methods have a significant drawback: they require transmitting Hessian information between devices, which is not only communication-inefficient but also raises privacy concerns. To address these issues, we present a novel approach that transforms the Newton direction learning problem into a sum of separable functions subject to a consensus constraint and, using the proximal primal-dual (Prox-PDA) algorithm, learns an inexact Newton direction alongside the global model without forcing devices to share their computed Hessians. Our algorithm, coined DIN, avoids sharing Hessian information between devices: each device shares only a model-sized vector, concealing first- and second-order information, which reduces the network's burden and improves both communication and energy efficiency. Furthermore, we prove that the DIN descent direction converges linearly to the optimal Newton direction. Numerical simulations corroborate that DIN achieves higher communication efficiency in terms of communication rounds while consuming less communication and computation energy than existing second-order decentralized baselines.
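A minimal sketch of the privacy motivation behind DIN follows: each device computes a Newton-like direction from its own local Hessian, which never leaves the device, and exchanges only model-sized vectors with ring neighbors for consensus. This is a toy least-squares illustration, not the paper's Prox-PDA formulation; the step size, topology, and iteration count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_dev, dim = 8, 5
A = [rng.standard_normal((20, dim)) for _ in range(n_dev)]   # private local data
x_true = rng.standard_normal(dim)
b = [Ai @ x_true + 0.1 * rng.standard_normal(20) for Ai in A]
x = [np.zeros(dim) for _ in range(n_dev)]

for _ in range(50):
    x_next = []
    for i in range(n_dev):
        g = A[i].T @ (A[i] @ x[i] - b[i])                    # local gradient
        H = A[i].T @ A[i] + 1e-3 * np.eye(dim)               # local Hessian, never shared
        newton_dir = np.linalg.solve(H, g)                   # local (inexact) Newton step
        # Only model-sized vectors cross the network (ring topology)
        avg = (x[(i - 1) % n_dev] + x[i] + x[(i + 1) % n_dev]) / 3.0
        x_next.append(avg - 0.5 * newton_dir)
    x = x_next

print("max deviation from truth:", max(np.linalg.norm(xi - x_true) for xi in x))
```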
Citations: 0
Improving Generalization of ML-Based IDS With Lifecycle-Based Dataset, Auto-Learning Features, and Deep Learning
Pub Date: 2024-03-16 | DOI: 10.1109/TMLCN.2024.3402158
Didik Sudyana;Ying-Dar Lin;Miel Verkerken;Ren-Hung Hwang;Yuan-Cheng Lai;Laurens D’Hooge;Tim Wauters;Bruno Volckaert;Filip De Turck
During the past 10 years, researchers have extensively explored the use of machine learning (ML) to enhance network intrusion detection systems (IDS). While many studies have focused on improving the accuracy of ML-based IDS, true effectiveness lies in robust generalization: the ability to classify unseen data accurately. Many existing models are trained and tested on the same dataset, failing to represent truly unseen scenarios; models trained and tested on different datasets often struggle to generalize effectively. This study improves generalization through a novel composite approach involving a lifecycle-based dataset (characterizing an attack as a sequence of techniques), automatic feature learning (auto-learning), and a CNN-based deep learning model. The resulting model is tested on five public datasets to assess its generalization performance. The proposed approach demonstrates outstanding generalization, achieving an average F1 score of 0.85 and a recall of 0.94. This significantly outperforms the average recalls of 0.56 and 0.42 achieved by attack-based datasets using CIC-IDS-2017 and CIC-IDS-2018 as training data, respectively. Furthermore, auto-learning features boost the F1 score by 0.2 compared to traditional statistical features. Overall, these efforts yield significant advancements in model generalization, offering a more robust strategy for addressing intrusion detection challenges.
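To make the lifecycle idea concrete, the sketch below defines a minimal 1D-CNN, in the spirit of the paper's deep model, that classifies an attack represented as a sequence of technique IDs. The embedding size, layer widths, technique vocabulary, and class count are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class LifecycleCNN(nn.Module):
    def __init__(self, n_techniques=200, emb=32, n_classes=5):
        super().__init__()
        self.emb = nn.Embedding(n_techniques, emb)       # technique ID -> vector
        self.conv = nn.Sequential(
            nn.Conv1d(emb, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),                     # pool over the sequence
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, seq):                              # seq: (batch, seq_len) int64
        z = self.emb(seq).transpose(1, 2)                # -> (batch, emb, seq_len)
        return self.head(self.conv(z).squeeze(-1))       # class logits

model = LifecycleCNN()
logits = model(torch.randint(0, 200, (4, 30)))           # 4 toy lifecycles, 30 steps
print(logits.shape)                                      # torch.Size([4, 5])
```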
Citations: 0
SICNN: Soft Interference Cancellation Inspired Neural Network Equalizers
Pub Date: 2024-03-13 | DOI: 10.1109/TMLCN.2024.3377174
Stefan Baumgartner;Oliver Lang;Mario Huemer
In recent years, data-driven machine learning approaches have been extensively studied to replace or enhance traditional model-based processing in digital communication systems. In this work, we focus on equalization and propose a novel neural network (NN) based approach, referred to as SICNN. SICNN is designed by deep unfolding a model-based iterative soft interference cancellation (SIC) method. It eliminates the main disadvantages of its model-based counterpart, which suffers from high computational complexity and performance degradation due to required approximations. We present different variants of SICNN. SICNNv1 is specifically tailored to single carrier frequency domain equalization (SC-FDE) systems, the communication system mainly considered in this work. SICNNv2 is more universal and is applicable as an equalizer in any communication system with a block-based data transmission scheme. Moreover, for both SICNNv1 and SICNNv2, we present versions with highly reduced numbers of learnable parameters. Another contribution of this work is a novel approach for generating training datasets for NN-based equalizers, which significantly improves their performance at high signal-to-noise ratios. We compare the bit error ratio performance of the proposed NN-based equalizers with state-of-the-art model-based and NN-based approaches, highlighting the superiority of SICNNv1 over all other methods for SC-FDE. To emphasize its universality, SICNNv2 is additionally applied to a unique word orthogonal frequency division multiplexing (UW-OFDM) system, where it achieves state-of-the-art performance. Furthermore, we present a thorough complexity analysis of the proposed NN-based equalization approaches and investigate the influence of the training set size on the performance of NN-based equalizers.
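The sketch below conveys the deep-unfolding idea behind SICNN in a heavily simplified, real-valued linear setting: each unfolded "layer" cancels the current interference estimate and refines the symbol estimate with a learnable per-layer damping weight. The layer count, initialization, and matched-filter start are assumptions; the actual SICNN operates on the SC-FDE signal model.

```python
import torch
import torch.nn as nn

class UnfoldedSIC(nn.Module):
    """Each unfolded layer cancels interference and refines the estimate."""

    def __init__(self, n_layers=4):
        super().__init__()
        self.alpha = nn.Parameter(torch.full((n_layers,), 0.5))  # learnable damping

    def forward(self, y, H):
        # y: (batch, m) received vector, H: (m, n) channel; matched-filter init
        scale = H.pow(2).sum(dim=0)
        x = (y @ H) / scale
        for a in self.alpha:
            residual = y - x @ H.T              # interference-cancelled residual
            x = x + a * (residual @ H) / scale  # damped refinement step
        return x

model = UnfoldedSIC()
x_hat = model(torch.randn(8, 16), torch.randn(16, 4))     # 8 blocks, 16 obs, 4 symbols
print(x_hat.shape)                                        # torch.Size([8, 4])
```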
Citations: 0
Hierarchical DDPG Based Reinforcement Learning Framework for Multi-Agent Collective Motion With Short Communication Ranges
Pub Date: 2024-03-13 | DOI: 10.1109/TMLCN.2024.3400059
Jiaxin Li;Peng Yi;Tong Duan;Zhen Zhang;Tao Hu
Collective motion is an important research topic in the multi-agent control field. However, existing multi-agent collective motion methods typically assume large communication ranges for individual agents; in leader-follower control with short communication ranges, if the leader dynamically changes its velocity without considering the followers' states, the communication topology can easily become disconnected, making multi-agent collective motion more challenging. In this work, a novel Hierarchical Deep Deterministic Policy Gradient (H-DDPG) based reinforcement learning framework is proposed to realize multi-agent collective motion with short communication ranges, keeping the communication topology connected as much as possible. In H-DDPG, multiple agents with a single leader and numerous followers are dynamically divided into several hierarchies to conduct distributed control when the leader's velocity changes. Two algorithms based on DDPG and the hierarchical strategy are designed to separately train followers in the first layer and followers in the remaining layers, which ensures that the agents form a tight swarm from a scattered distribution and that all followers can track the leader effectively. The experimental results demonstrate that, with short communication ranges, H-DDPG outperforms the hierarchical flocking method in keeping the communication topology connected and shaping a tighter swarm.
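A small sketch of the layering step follows: under a short communication radius, followers can be grouped into hierarchies by hop distance from the leader (a BFS over the communication graph), so each layer only needs to track the layer above it. The agent positions and radius are toy values; the paper's hierarchy construction and DDPG training are not shown.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(3)
pos = rng.uniform(0.0, 10.0, size=(20, 2))
pos[0] = [5.0, 5.0]                       # agent 0 is the leader
radius = 3.0                              # short communication range
adj = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1) <= radius

layer = np.full(len(pos), -1)
layer[0] = 0
queue = deque([0])
while queue:                              # BFS outward from the leader
    i = queue.popleft()
    for j in np.flatnonzero(adj[i]):
        if layer[j] == -1:
            layer[j] = layer[i] + 1       # hop distance defines the hierarchy
            queue.append(j)

print(layer)                              # -1 = currently disconnected follower
```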
Citations: 0
An IoMT-Based Incremental Learning Framework With a Novel Feature Selection Algorithm for Intelligent Diagnosis in Smart Healthcare
Pub Date: 2024-03-06 | DOI: 10.1109/TMLCN.2024.3374253
Siva Sai;Kartikey Singh Bhandari;Aditya Nawal;Vinay Chamola;Biplab Sikdar
Several recent research papers in the Internet of Medical Things (IoMT) domain employ machine learning techniques to detect data patterns and trends, identify anomalies, predict and prevent adverse events, and develop personalized patient treatment plans. Despite the potential of machine learning techniques in IoMT to revolutionize healthcare, several challenges remain. Conventional machine learning models in the IoMT domain are static: they are trained once on historical datasets and then used to infer on real-time data, an approach that does not consider the patient's recent health-related data. In the conventional paradigm, the models must be re-trained from scratch even to incorporate a few additional samples. Also, since the training of conventional machine learning models generally happens on cloud platforms, there are risks to security and privacy. Addressing these issues, we propose an edge-based incremental learning framework with a novel feature selection algorithm for the intelligent diagnosis of patients. The approach aims to improve the accuracy and efficiency of medical diagnosis by continuously learning from new patient data and adapting to patient conditions over time, while reducing privacy and security issues. To address the issue of excessive features, which can increase the computational burden on incremental models, we propose a novel feature selection algorithm based on bijective soft sets, Shannon entropy, and TOPSIS (Technique for Order Preference by Similarity to Ideal Solution). We propose two incremental algorithms inspired by Aggregated Mondrian Forests and Half-Space Trees for classification and anomaly detection. The proposed classification model achieves an accuracy of 87.63%, which is 13.61% better than the best-performing batch learning-based model. Similarly, the proposed anomaly detection model achieves an accuracy of 97.22%, which is 1.76% better than the best-performing batch-based model. The proposed incremental algorithms for classification and anomaly detection are 9× and 16× faster than their corresponding best-performing batch learning-based models.
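As a concrete sketch of the entropy-plus-TOPSIS portion of the proposed feature selection (the bijective-soft-set step is omitted), the code below weights per-feature criteria by Shannon entropy and ranks features by TOPSIS closeness to the ideal solution. The criteria matrix is random stand-in data, and the three benefit criteria are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
# 12 candidate features scored on 3 benefit criteria (stand-in values)
M = rng.uniform(0.01, 1.0, size=(12, 3))

# Shannon-entropy criterion weights: low-entropy (more discriminative)
# criteria receive more weight
P = M / M.sum(axis=0)
E = -(P * np.log(P)).sum(axis=0) / np.log(M.shape[0])
w = (1.0 - E) / (1.0 - E).sum()

# TOPSIS: rank by relative closeness to the ideal solution
V = w * M / np.linalg.norm(M, axis=0)            # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)
d_best = np.linalg.norm(V - ideal, axis=1)
d_worst = np.linalg.norm(V - anti, axis=1)
closeness = d_worst / (d_best + d_worst)

print("features ranked best-first:", np.argsort(-closeness))
```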
Citations: 0
Energy-Efficient Trajectory Planning With Joint Device Selection and Power Splitting for mmWaves-Enabled UAV-NOMA Networks
Pub Date: 2024-03-02 | DOI: 10.1109/TMLCN.2024.3396438
Ahmad Gendia;Osamu Muta;Sherief Hashima;Kohei Hatano
This paper proposes two energy-efficient reinforcement learning (RL) based algorithms for millimeter wave (mmWave) enabled unmanned aerial vehicle (UAV) communications toward beyond-5G (B5G). These can be especially useful in ad-hoc communication scenarios within a neighborhood suffering main-network connectivity problems, such as areas affected by natural disasters. To improve the system's overall sum-rate performance, the UAV-operated mobile base station (UAV-MBS) can harness non-orthogonal multiple access (NOMA) as an efficient protocol to grant ground devices access to fast downlink connections. Dynamically selecting suitable hovering spots for the battery-constrained UAV within the target zone, together with calibrated NOMA power control and proper device pairing, is critical for optimized performance. We propose cost-subsidized multi-armed bandit (CS-MAB) and double deep Q-network (DDQN) based solutions to jointly address the problems of dynamic UAV path design, device pairing, and power splitting for downlink data transmission in NOMA-based systems. Numerical simulations verify that the proposed RL-based solutions support high sum-rates. In addition, exhaustive and random search benchmarks are provided as baselines for the achievable upper and lower sum-rate levels, respectively. The proposed DDQN agent achieves 96% of the sum-rate provided by optimal exhaustive scanning, whereas CS-MAB reaches 91.5%. By contrast, a conventional channel state sorting pairing (CSSP) solver achieves about 89.3%.
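The sketch below illustrates the cost-subsidized bandit idea with a toy UCB variant: each arm is a candidate hovering spot, the reward is the observed sum-rate, and a per-spot energy cost is subtracted from the UCB index. The reward and cost models, horizon, and exploration constant are assumptions, not the paper's CS-MAB specification.

```python
import numpy as np

rng = np.random.default_rng(5)
n_spots, horizon = 6, 500
true_rate = rng.uniform(1.0, 5.0, n_spots)        # mean sum-rate per hovering spot
cost = rng.uniform(0.1, 0.5, n_spots)             # per-visit energy cost per spot

counts = np.zeros(n_spots)
means = np.zeros(n_spots)
for t in range(1, horizon + 1):
    if t <= n_spots:
        a = t - 1                                 # play every spot once first
    else:
        bonus = np.sqrt(2.0 * np.log(t) / counts)
        a = int(np.argmax(means + bonus - cost))  # cost-subsidized UCB index
    reward = true_rate[a] + rng.normal(0.0, 0.5)  # noisy observed sum-rate
    counts[a] += 1
    means[a] += (reward - means[a]) / counts[a]

print("most-played spot:", int(np.argmax(counts)))
print("true net values:", true_rate - cost)
```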
Citations: 0
Device-Based Cellular Throughput Prediction for Video Streaming: Lessons From a Real-World Evaluation
Pub Date: 2024-03-01 | DOI: 10.1109/TMLCN.2024.3352541
Darijo Raca;Ahmed H. Zahran;Cormac J. Sreenan;Rakesh K. Sinha;Emir Halepovic;Vijay Gopalakrishnan
AI-driven data analysis methods have garnered attention in enhancing the performance of wireless networks. One such application is the prediction of downlink throughput in mobile cellular networks. Accurate throughput predictions have demonstrated significant application benefits, such as improving the quality of experience in adaptive video streaming. However, the high degree of variability in cellular link behaviour, coupled with device mobility and diverse traffic demands, presents a complex problem. Numerous published studies have explored the application of machine learning to address this problem, displaying potential when trained and evaluated with traffic traces collected from operational networks. The focus of this paper is an empirical investigation of machine learning-based throughput prediction that runs in real-time on a smartphone, and its evaluation with video streaming in a range of real-world cellular network settings. We report on a number of key challenges that arise when performing prediction “in the wild”, dealing with practical issues one encounters with online data (not traces) and the limitations of real smartphones. These include data sampling, distribution shift, and data labelling. We describe our current solutions to these issues and quantify their efficacy, drawing lessons that we believe will be valuable to network practitioners planning to use such methodologies in operational cellular networks.
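A minimal on-device-style sketch of the task follows: summarize a sliding window of recent throughput samples into features, train a lightweight regressor on a time-ordered split (no shuffling, so training never sees the future), and predict mean throughput over a short horizon. The synthetic trace, window/horizon lengths, feature set, and regressor choice are assumptions, not the paper's system.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(6)
# Toy throughput trace (Mbps); a real deployment would use on-device samples
tput = np.abs(np.cumsum(rng.normal(0.0, 1.0, 2000))) + 5.0

k, horizon = 20, 5                                # window length, prediction horizon
windows = [tput[i:i + k] for i in range(len(tput) - k - horizon)]
X = np.array([[w.mean(), w.std(), w[-1]] for w in windows])
y = np.array([tput[i + k:i + k + horizon].mean()
              for i in range(len(windows))])

split = int(0.8 * len(X))                         # time-ordered split respects drift
model = GradientBoostingRegressor().fit(X[:split], y[:split])
print("R^2 on held-out future:", round(model.score(X[split:], y[split:]), 3))
```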
Citations: 0