
Latest Publications in IEEE Transactions on Machine Learning in Communications and Networking

Cooperate or Not Cooperate: Transfer Learning With Multi-Armed Bandit for Spatial Reuse in Wi-Fi
Pub Date: 2024-02-29 DOI: 10.1109/TMLCN.2024.3371929
Pedro Enrique Iturria-Rivera;Marcel Chenier;Bernard Herscovici;Burak Kantarci;Melike Erol-Kantarci
The exponential increase in the demand for high-performance services such as streaming video and gaming by wireless devices has posed several challenges for Wireless Local Area Networks (WLANs). In the context of Wi-Fi, the newest standards, IEEE 802.11ax and 802.11be, bring high data rates in dense user deployments. Additionally, they introduce new flexible features in the physical layer, such as dynamic Clear-Channel-Assessment (CCA) thresholds, to improve spatial reuse (SR) in response to radio spectrum scarcity in dense scenarios. In this paper, we formulate the Transmission Power (TP) and CCA configuration problem with the objective of maximizing fairness and minimizing station starvation. We present five main contributions to distributed SR optimization using Multi-Agent Multi-Armed Bandits (MA-MABs). First, we provide regret analysis for the distributed Multi-Agent Contextual MABs (MA-CMABs) proposed in this work. Second, we propose reducing the action space given the large cardinality of action combinations of TP and CCA threshold values per Access Point (AP). Third, we present two deep MA-CMAB algorithms, named Sample Average Uncertainty (SAU)-Coop and SAU-NonCoop, as cooperative and non-cooperative versions to improve SR. Additionally, we analyze the viability of MA-MAB solutions based on the $\epsilon$-greedy, Upper Confidence Bound (UCB), and Thompson Sampling (TS) techniques. Finally, we propose a deep reinforcement transfer learning technique to improve adaptability in dynamic environments. Simulation results show that cooperation via the SAU-Coop algorithm leads to a 14.7% improvement in cumulative throughput and a 32.5% reduction in Packet Loss Rate (PLR) in comparison to non-cooperative approaches. Under dynamic scenarios, transfer learning mitigates service drops for at least 60% of the total users.
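As a rough sketch of the kind of bandit loop the abstract describes (not the authors' implementation), the Python below runs independent $\epsilon$-greedy agents, one per AP, over a reduced grid of (TP, CCA) pairs; the grid, the number of APs, and the reward signal are illustrative assumptions.

```python
import random

class EpsilonGreedyAgent:
    """One AP agent: epsilon-greedy over a discrete set of (TP, CCA) arms."""
    def __init__(self, actions, epsilon=0.1):
        self.actions = actions                # list of (tx_power_dBm, cca_dBm)
        self.epsilon = epsilon
        self.counts = [0] * len(actions)
        self.values = [0.0] * len(actions)    # running mean reward per arm

    def select(self):
        if random.random() < self.epsilon:    # explore
            return random.randrange(len(self.actions))
        return max(range(len(self.actions)), key=self.values.__getitem__)

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Reduced action space: a coarse grid instead of the full TP x CCA product.
ACTIONS = [(tp, cca) for tp in (5, 10, 15, 20) for cca in (-82, -72, -62)]
agents = [EpsilonGreedyAgent(ACTIONS) for _ in range(4)]      # 4 APs

def reward(tp, cca):
    # Placeholder for the measured fairness/starvation objective.
    return random.gauss(tp / 20.0 - abs(cca + 72) / 40.0, 0.1)

for step in range(1000):
    for agent in agents:
        arm = agent.select()
        agent.update(arm, reward(*agent.actions[arm]))
```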
Citations: 0
IEEE Communications Society Board of Governors
Pub Date: 2024-02-23 DOI: 10.1109/TMLCN.2024.3366609
{"title":"IEEE Communications Society Board of Governors","authors":"","doi":"10.1109/TMLCN.2024.3366609","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3366609","url":null,"abstract":"","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"C3-C3"},"PeriodicalIF":0.0,"publicationDate":"2024-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10443923","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139942648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Outage Performance and Novel Loss Function for an ML-Assisted Resource Allocation: An Exact Analytical Framework
Pub Date: 2024-02-22 DOI: 10.1109/TMLCN.2024.3369007
Nidhi Simmons;David E. Simmons;Michel Daoud Yacoub
In this paper, we present Machine Learning (ML) solutions to address the reliability challenges likely to be encountered in advanced wireless systems (5G, 6G, and indeed beyond). Specifically, we introduce a novel loss function to minimize the outage probability of an ML-based resource allocation system. A single-user multi-resource greedy allocation strategy constitutes our application scenario, for which an ML binary classification predictor assists in selecting a resource satisfying the established outage criterion. While other resource allocation policies may be suitable, they are not the focus of our study. Instead, our primary emphasis is on theoretically developing this loss function and leveraging it to train an ML model to address the outage probability challenge. With no access to future channel state information, this predictor foresees each resource’s likely future outage status. When the predictor encounters a resource it believes will be satisfactory, it allocates it to the user. The predictor aims to ensure that a user avoids resources likely to undergo an outage. Our main result establishes exact and asymptotic expressions for this system’s outage probability. These expressions reveal that focusing solely on the optimization of the per-resource outage probability conditioned on the ML predictor recommending resource allocation (a strategy that, at face value, looks to be the most appropriate) may produce inadequate predictors that reject every resource. They also reveal that focusing on standard metrics, like precision, false-positive rate, or recall, may not produce optimal predictors. With our result, we formulate a theoretically optimal, differentiable loss function to train our predictor. We then compare predictors trained using this and traditional loss functions, namely binary cross-entropy (BCE), mean squared error (MSE), and mean absolute error (MAE). In all scenarios, predictors trained using our novel loss function provide superior outage probability performance. Moreover, in some cases, our loss function outperforms predictors trained with BCE, MAE, and MSE by multiple orders of magnitude. Additionally, when applied to another ML-based resource allocation scheme (a modified greedy algorithm), our proposed loss function maintains its efficacy.
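For concreteness, here is a minimal sketch (our reading of the abstract, not the authors' framework) of the single-user greedy allocation loop: the predictor scans resources in order and the user takes the first one predicted to be outage-free, falling back to the last resource otherwise. `predict_ok` and the SNR-threshold stand-in are hypothetical placeholders for the trained classifier.

```python
def greedy_allocate(resources, predict_ok):
    """Allocate the first resource the ML predictor believes avoids an outage."""
    for r in resources[:-1]:
        if predict_ok(r):          # predictor foresees no outage on r
            return r
    return resources[-1]           # nothing passed: forced to take the final one

# Illustrative use with a threshold test standing in for the trained classifier.
resources = [{"id": i, "snr_db": s} for i, s in enumerate([3.0, 7.5, 12.1])]
print(greedy_allocate(resources, predict_ok=lambda r: r["snr_db"] > 10.0))
# -> {'id': 2, 'snr_db': 12.1}
```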
Citations: 0
On Learning Generalized Wireless MAC Communication Protocols via a Feasible Multi-Agent Reinforcement Learning Framework
Pub Date: 2024-02-20 DOI: 10.1109/TMLCN.2024.3368367
Luciano Miuccio;Salvatore Riolo;Sumudu Samarakoon;Mehdi Bennis;Daniela Panno
Automatically learning medium access control (MAC) communication protocols via multi-agent reinforcement learning (MARL) has received considerable attention as a way to cater to the extremely diverse real-world scenarios expected in 6G wireless networks. Several state-of-the-art solutions adopt the centralized training with decentralized execution (CTDE) learning method, where agents learn optimal MAC protocols by exploiting the information exchanged with a central unit. Despite the promising results achieved in these works, two notable challenges are neglected. First, these works were designed to be trained in computer simulations, assuming an omniscient environment and neglecting communication overhead issues, thus making the implementation impractical in real-world scenarios. Second, the learned protocols fail to generalize outside of the scenario they were trained on. In this paper, we propose a new feasible learning framework that enables practical implementations of training procedures, thus allowing learned MAC protocols to be tailor-made for the scenario where they will be executed. Moreover, to address the second challenge, we leverage the concept of state abstraction and incorporate it into the MARL framework for better generalization. As a result, the policies are learned in an abstracted observation space that contains only useful information extracted from the original high-dimensional and redundant observation space. Simulation results show that our feasible learning framework exhibits performance comparable to that of the infeasible solutions. In addition, the learning frameworks adopting observation abstraction offer better generalization capabilities in terms of the number of UEs, the number of data packets to transmit, and channel conditions.
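As a toy illustration of the state-abstraction idea (the mapping below is our assumption, not the paper's exact design), a high-dimensional per-UE observation is compressed into a few coarse features before reaching each agent's policy:

```python
def abstract_observation(raw_obs):
    """Map a redundant, high-dimensional observation to a compact abstract state."""
    backlog = sum(raw_obs["buffer_sizes"])
    avg_snr = sum(raw_obs["snrs"]) / len(raw_obs["snrs"])
    return (
        min(backlog // 4, 3),        # quantized queue backlog, 4 levels
        int(avg_snr > 10.0),         # coarse good/bad channel indicator
        raw_obs["acks"][-1],         # outcome of the last transmission
    )

obs = {"buffer_sizes": [2, 5, 1], "snrs": [8.0, 14.0, 11.5], "acks": [1, 0, 1]}
print(abstract_observation(obs))     # (2, 1, 1): input to the agent's Q-network
```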
Citations: 0
Getting the Best Out of Both Worlds: Algorithms for Hierarchical Inference at the Edge
Pub Date: 2024-02-14 DOI: 10.1109/TMLCN.2024.3366501
Vishnu Narayanan Moothedath;Jaya Prakash Champati;James Gross
We consider a resource-constrained Edge Device (ED), such as an IoT sensor or a microcontroller unit, embedded with a small-size ML model (S-ML) for a generic classification application, and an Edge Server (ES) that hosts a large-size ML model (L-ML). Since the inference accuracy of the S-ML is lower than that of the L-ML, offloading all the data samples to the ES results in high inference accuracy, but it defeats the purpose of embedding the S-ML on the ED and forgoes the benefits of reduced latency, bandwidth savings, and energy efficiency that local inference offers. In order to get the best out of both worlds, i.e., the benefits of doing inference on the ED and the benefits of doing inference on the ES, we explore the idea of Hierarchical Inference (HI), wherein the S-ML inference is only accepted when it is correct; otherwise, the data sample is offloaded for L-ML inference. However, the ideal implementation of HI is infeasible as the correctness of the S-ML inference is not known to the ED. We thus propose an online meta-learning framework that the ED can use to predict the correctness of the S-ML inference. In particular, we propose to use the probability corresponding to the maximum probability class output by the S-ML for a data sample and decide whether to offload it or not. The resulting online learning problem turns out to be a Prediction with Expert Advice (PEA) problem with a continuous expert space. For a full feedback scenario, where the ED receives feedback on the correctness of the S-ML once it accepts the inference, we propose the HIL-F algorithm and prove a sublinear regret bound of $\sqrt{n\ln(1/\lambda_{\text{min}})/2}$ without any assumption on the smoothness of the loss function, where $n$ is the number of data samples and $\lambda_{\text{min}}$ is the minimum difference between any two distinct maximum probability values across the data samples. For a no-local feedback scenario, where the ED does not receive the ground truth for the classification, we propose the HIL-N algorithm and prove that it has an $O\left(n^{2/3}\ln^{1/3}(1/\lambda_{\text{min}})\right)$ regret bound. We evaluate and benchmark the performance of the proposed algorithms for an image classification application using four datasets: Imagenette, Imagewoof, MNIST, and CIFAR-10.
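To make the full-feedback setting concrete, the sketch below runs exponential weights over a discretized grid of acceptance thresholds (HIL-F itself works over the continuous expert space; the grid, offload cost `BETA`, and learning rate `ETA` are illustrative assumptions): the S-ML output is accepted locally when its top-class probability clears the sampled threshold, and every counterfactual threshold is charged its loss once the ground truth arrives.

```python
import math, random

THETAS = [i / 20 for i in range(1, 20)]    # candidate confidence thresholds
weights = [1.0] * len(THETAS)
ETA, BETA = 0.5, 0.2                       # learning rate, relative offload cost

def decide(p_max):
    """Accept the local S-ML inference iff p_max clears a sampled threshold."""
    r, acc = random.random() * sum(weights), 0.0
    for theta, w in zip(THETAS, weights):
        acc += w
        if r <= acc:
            return p_max >= theta

def update(p_max, sml_correct):
    """Full feedback: charge each expert its counterfactual loss, then reweight."""
    for i, theta in enumerate(THETAS):
        loss = (0.0 if sml_correct else 1.0) if p_max >= theta else BETA
        weights[i] *= math.exp(-ETA * loss)

# One round: S-ML reports confidence 0.8 but turns out to be wrong.
offloaded = not decide(p_max=0.8)
update(p_max=0.8, sml_correct=False)
```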
Citations: 0
Stealthy Adversarial Attacks on Machine Learning-Based Classifiers of Wireless Signals
Pub Date: 2024-02-13 DOI: 10.1109/TMLCN.2024.3366161
Wenhan Zhang;Marwan Krunz;Gregory Ditzler
Machine learning (ML) has been successfully applied to classification tasks in many domains, including computer vision, cybersecurity, and communications. Although highly accurate classifiers have been developed, research shows that these classifiers are, in general, vulnerable to adversarial machine learning (AML) attacks. In one type of AML attack, the adversary trains a surrogate classifier (called the attacker’s classifier) to produce intelligently crafted low-power “perturbations” that degrade the accuracy of the targeted (defender’s) classifier. In this paper, we focus on radio frequency (RF) signal classifiers and study their vulnerabilities to AML attacks. Specifically, we consider several exemplary protocol and modulation classifiers, designed using convolutional neural networks (CNNs) and recurrent neural networks (RNNs). We first show the high accuracy of such classifiers under random noise (AWGN). We then study their performance under three types of low-power AML perturbations (the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and DeepFool), considering different amounts of information at the attacker. At one extreme (the so-called “white-box” attack), the attacker has complete knowledge of the defender’s classifier and its training data. As expected, our results reveal that in this case, the AML attack significantly degrades the defender’s classification accuracy. We gradually reduce the attacker’s knowledge and study five attack scenarios that represent different amounts of information at the attacker. Surprisingly, even when the attacker has limited or no knowledge of the defender’s classifier and its power is relatively low, the attack is still significant. We also study various practical issues related to the wireless environment, including channel impairments and misalignment between attacker and transmitter signals. Furthermore, we study the effectiveness of intermittent AML attacks. Even under such imperfections, a low-power AML attack can still significantly reduce the defender’s classification accuracy for both protocol and modulation classifiers. Lastly, we propose a two-step adversarial training mechanism to defend against AML attacks and contrast its performance against other state-of-the-art defense strategies. The proposed defense approach increases the classification accuracy by up to 50%, even in scenarios where the attacker has perfect knowledge of the defender and exhibits a relatively large power budget.
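As a self-contained illustration of the FGSM perturbation studied here (the paper targets CNN/RNN signal classifiers; the toy logistic-regression surrogate, its weights, and the budget `eps` below are our assumptions), the loss gradient is available in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1           # attacker's surrogate classifier
x, y = rng.normal(size=8), 1.0           # one feature vector and its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the binary cross-entropy wrt the input: (sigmoid(w.x + b) - y) * w
grad_x = (sigmoid(w @ x + b) - y) * w

eps = 0.05                                # low-power perturbation budget
x_adv = x + eps * np.sign(grad_x)         # FGSM: one signed-gradient step

print("clean score:", sigmoid(w @ x + b))
print("adv score:  ", sigmoid(w @ x_adv + b))   # pushed toward misclassification
```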
Citations: 0
Buyers Collusion in Incentivized Forwarding Networks: A Multi-Agent Reinforcement Learning Study
Pub Date: 2024-02-12 DOI: 10.1109/TMLCN.2024.3365420
Mostafa Ibrahim;Sabit Ekin;Ali Imran
We examine, from an economic perspective, the issue of monetarily incentivized forwarding in a multi-hop mesh network architecture. It is anticipated that credit-incentivized forwarding and relaying will be a simple method of exchanging transmission power and spectrum for connectivity. However, as in any other free market, gateways and forwarding nodes may create an oligopolistic market for the users they serve. In this study, a coalition scheme among buyers aims to address price control by gateways or by nodes closer to gateways. In a Stackelberg competition game, buyer agents (users) and sellers (gateways) make decisions using reinforcement learning (RL), with decentralized Deep Q-Networks used to buy and sell forwarding resources. We allow communication links between the buyers with a limited messaging space, without defining a collusion mechanism. The idea is to demonstrate that, through messaging and RL, tacit collusion can emerge between agents in a decentralized setup. The multi-agent reinforcement learning (MARL) system is presented and analyzed from a machine-learning perspective. Moreover, MARL dynamics are discussed via mean field analysis to better understand the causes of divergence and to make implementation recommendations for such systems. Finally, the simulation results demonstrate the coordination that emerges among the users.
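A minimal sketch of the buyer side (our illustration: tabular Q-learning stands in for the paper's deep Q-networks, and the bid set, rewards, and message encoding are toy assumptions). Each buyer conditions on the other buyer's last broadcast bid, the channel through which tacit collusion can emerge:

```python
import random
from collections import defaultdict

PRICES = [1, 2, 3]                        # bids a buyer may offer a gateway
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

class BuyerAgent:
    def __init__(self):
        self.q = defaultdict(float)       # (peer_message, bid) -> value

    def act(self, peer_msg):
        if random.random() < EPS:
            return random.choice(PRICES)
        return max(PRICES, key=lambda bid: self.q[(peer_msg, bid)])

    def learn(self, peer_msg, bid, reward, next_msg):
        best_next = max(self.q[(next_msg, a)] for a in PRICES)
        self.q[(peer_msg, bid)] += ALPHA * (
            reward + GAMMA * best_next - self.q[(peer_msg, bid)]
        )

a, b = BuyerAgent(), BuyerAgent()
msg_a = msg_b = 0                         # last broadcast bids (the "messages")
for t in range(500):
    bid_a, bid_b = a.act(msg_b), b.act(msg_a)
    # Toy reward: paying less is better; identical bids split the resource.
    r_a = -bid_a + (0.5 if bid_a != bid_b else 0.0)
    r_b = -bid_b + (0.5 if bid_a != bid_b else 0.0)
    a.learn(msg_b, bid_a, r_a, bid_b)     # next state: peer's new message
    b.learn(msg_a, bid_b, r_b, bid_a)
    msg_a, msg_b = bid_a, bid_b
```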
Citations: 0
GAN-Based Evasion Attack in Filtered Multicarrier Waveforms Systems
Pub Date: 2024-02-02 DOI: 10.1109/TMLCN.2024.3361834
Kawtar Zerhouni;Gurjot Singh Gaba;Mustapha Hedabou;Taras Maksymyuk;Andrei Gurtov;El Mehdi Amhoud
Generative adversarial networks (GANs), a category of deep learning models, have become a cybersecurity concern for wireless communication systems. These networks enable potential attackers to deceive receivers that rely on convolutional neural networks (CNNs) by transmitting deceptive wireless signals that are statistically indistinguishable from genuine ones. While GANs have previously been used for digitally modulated single-carrier waveforms, this study explores their applicability to modeling filtered multi-carrier waveforms, such as orthogonal frequency-division multiplexing (OFDM), filtered orthogonal FDM (F-OFDM), generalized FDM (GFDM), filter bank multi-carrier (FBMC), and universal filtered MC (UFMC). In this research, an evasion attack is conducted using GAN-generated counterfeit filtered multi-carrier signals to trick the target receiver. The results show a remarkable 99.7% probability of the receiver misclassifying these GAN-based fabricated signals as authentic ones. This highlights the urgent need to investigate and develop preventive measures against this vulnerability.
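A minimal PyTorch skeleton of the attack pipeline described above (a structural assumption on our part, not the paper's published code): a generator learns to emit fake flattened I/Q vectors that a discriminator, standing in for the CNN receiver, cannot separate from genuine filtered multi-carrier frames; dimensions and the training data are toy placeholders.

```python
import torch
import torch.nn as nn

DIM, NOISE = 128, 32                      # flattened I/Q length, latent size
G = nn.Sequential(nn.Linear(NOISE, 256), nn.ReLU(), nn.Linear(256, DIM), nn.Tanh())
D = nn.Sequential(nn.Linear(DIM, 256), nn.ReLU(), nn.Linear(256, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.randn(64, DIM)               # placeholder for captured OFDM/FBMC frames
for step in range(200):
    # Discriminator: push real frames toward 1, generated frames toward 0.
    fake = G(torch.randn(64, NOISE)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: fool the discriminator into labeling fakes as real.
    fake = G(torch.randn(64, NOISE))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```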
Citations: 0
Unleashing the Potential of Knowledge Distillation for IoT Traffic Classification
Pub Date: 2024-01-31 DOI: 10.1109/TMLCN.2024.3360915
Mahmoud Abbasi;Amin Shahraki;Javier Prieto;Angélica González Arrieta;Juan M. Corchado
The Internet of Things (IoT) has revolutionized our lives by generating large amounts of data; however, this data needs to be collected, processed, and analyzed in real time. Network Traffic Classification (NTC) in IoT is a crucial step for optimizing network performance, enhancing security, and improving user experience. Various methods have been introduced for NTC, and machine learning solutions have recently received considerable attention in this field; however, traditional Machine Learning (ML) methods struggle with the complexity and heterogeneity of IoT traffic, as well as with the limited resources of IoT devices. Deep learning shows promise but is computationally intensive for resource-constrained IoT devices. Knowledge distillation is a solution that helps ML by compressing complex models into smaller ones suitable for IoT devices. In this paper, we examine the use of knowledge distillation for IoT traffic classification. Through experiments, we show that the student model achieves a balance between accuracy and efficiency: it exhibits accuracy similar to that of the larger teacher model while maintaining a smaller size. This makes it a suitable alternative for resource-constrained scenarios like mobile or IoT traffic classification. We find that the knowledge distillation technique effectively transfers knowledge from the teacher model to the student model, even with reduced training data. The results also demonstrate the robustness of the approach, as the student model performs well even with the removal of certain classes. Additionally, we highlight the trade-off between model capacity and computational cost, suggesting that increasing model size beyond a certain point may not be beneficial. The findings emphasize the value of soft labels in training student models with limited data resources.
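The soft-label objective at the core of this approach is compact enough to state directly; below is a standard distillation loss (the temperature and mixing weight are the usual hyperparameters, chosen here illustratively, and the 8-class setup is a stand-in for a traffic classifier):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend temperature-softened teacher targets with the hard-label loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                            # usual T^2 gradient rescaling
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Random logits for a batch of 16 flows over 8 traffic classes.
s, t = torch.randn(16, 8), torch.randn(16, 8)
y = torch.randint(0, 8, (16,))
print(distillation_loss(s, t, y))
```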
Citations: 0
Joint Active and Passive Beamforming for IRS-Assisted Monostatic Backscatter Systems: An Unsupervised Learning Approach
Pub Date: 2024-01-17 DOI: 10.1109/TMLCN.2024.3355317
Sahar Idrees;Salman Durrani;Zhiwei Xu;Xiaolun Jia;Xiangyun Zhou
Backscatter Communication (BackCom) has been envisioned as a key enabler of ubiquitous connectivity in the Internet of Things (IoT). However, the inherent issues of limited range and low achievable bit rate are prominent barriers to the widespread deployment of BackCom. In this work, we address these challenges by considering a monostatic BackCom system assisted by an intelligent reflecting surface (IRS) and controlled seamlessly by a data-driven deep learning (DL)-based approach. We propose BackIRS-Net, a deep residual convolutional neural network (DRCNN) that exploits the unique coupling between the IRS phase shifts and the beamforming at the reader to jointly optimize these quantities and thereby maximize the effective signal-to-noise ratio (SNR) of the backscatter signal received at the reader. We show that the performance of a trained BackIRS-Net is close to that of the conventional optimization-based approach while requiring much less computational complexity and time, which indicates the utility of this scheme for real-time deployment. Our results show that an IRS of moderate size can significantly improve the backscatter SNR, extending the range of monostatic BackCom by a factor of 4, which is an important improvement in the context of BackCom-based IoT systems.
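A minimal sketch of the unsupervised training idea (the rank-one cascaded channel model, dimensions, and network below are simplifying assumptions, not the paper's BackIRS-Net): a network maps channel features to IRS phase shifts and is trained on the negative effective SNR of the round-trip link, so no labeled optimal phases are needed.

```python
import torch
import torch.nn as nn

N = 32                                     # number of IRS elements
h_f = torch.randn(N, dtype=torch.cfloat)   # cascaded reader -> IRS -> tag channel
h_b = torch.randn(N, dtype=torch.cfloat)   # cascaded tag -> IRS -> reader channel

net = nn.Sequential(nn.Linear(4 * N, 128), nn.ReLU(), nn.Linear(128, N))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
feat = torch.cat([h_f.real, h_f.imag, h_b.real, h_b.imag])   # real-valued input

for step in range(300):
    theta = net(feat)                             # unconstrained phase angles
    phi = torch.exp(1j * theta)                   # unit-modulus IRS reflection
    gain = (h_f * phi).sum() * (h_b * phi).sum()  # round-trip channel gain
    loss = -torch.abs(gain) ** 2                  # maximize effective SNR
    opt.zero_grad(); loss.backward(); opt.step()
```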
Citations: 0