
Latest publications in IEEE Transactions on Machine Learning in Communications and Networking

Asynchronous Real-Time Federated Learning for Anomaly Detection in Microservice Cloud Applications
Pub Date : 2025-01-09 DOI: 10.1109/TMLCN.2025.3527919
Mahsa Raeiszadeh;Amin Ebrahimzadeh;Roch H. Glitho;Johan Eker;Raquel A. F. Mini
The complexity and dynamicity of microservice architectures in cloud environments present substantial challenges to the reliability and availability of the services built on these architectures. Therefore, effective anomaly detection is crucial to prevent impending failures and resolve them promptly. Distributed data analysis techniques based on machine learning (ML) have recently gained attention in detecting anomalies in microservice systems. ML-based anomaly detection techniques mostly require centralized data collection and processing, which may raise scalability and computational issues in practice. In this paper, we propose an Asynchronous Real-Time Federated Learning (ART-FL) approach for anomaly detection in cloud-based microservice systems. In our approach, edge clients perform real-time learning with continuous streaming local data. At the edge clients, we model intra-service behaviors and inter-service dependencies in multi-source distributed data based on a Span Causal Graph (SCG) representation and train a model through a combination of Graph Neural Network (GNN) and Positive and Unlabeled (PU) learning. Our FL approach updates the global model in an asynchronous manner to achieve accurate and efficient anomaly detection, addressing computational overhead across diverse edge clients, including those that experience delays. Our trace-driven evaluations indicate that the proposed method outperforms the state-of-the-art anomaly detection methods by 4% in terms of $F_{1}$-score while meeting the given time efficiency and scalability requirements.
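As an illustration of the asynchronous aggregation idea, a staleness-discounted global update can be sketched as follows (the discount rule, `base_lr`, and all names are illustrative assumptions, not the exact ART-FL update):

```python
import numpy as np

def async_update(global_model, client_model, staleness, base_lr=0.5):
    """Merge one client's (possibly stale) model into the global model.

    The mixing weight decays with staleness, so a delayed edge client still
    contributes without dragging the global model toward an outdated state.
    """
    alpha = base_lr / (1.0 + staleness)          # assumed staleness discount
    return (1.0 - alpha) * global_model + alpha * client_model

global_model = np.zeros(4)
fresh = async_update(global_model, np.ones(4), staleness=0)  # mixing weight 0.5
stale = async_update(global_model, np.ones(4), staleness=4)  # mixing weight 0.1
```

Unlike synchronous FedAvg, each client update is applied as it arrives, which is what lets slow edge clients participate without blocking the round.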
Volume 3, pages 176-194. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10835399
Citations: 0
Private Collaborative Edge Inference via Over-the-Air Computation
Pub Date : 2025-01-06 DOI: 10.1109/TMLCN.2025.3526551
Selim F. Yilmaz;Burak Hasircioğlu;Li Qiao;Deniz Gündüz
We consider collaborative inference at the wireless edge, where each client’s model is trained independently on its local dataset. Clients are queried in parallel to make an accurate decision collaboratively. In addition to maximizing the inference accuracy, we also want to ensure the privacy of local models. To this end, we leverage the superposition property of the multiple access channel to implement bandwidth-efficient multi-user inference methods. We propose different methods for ensemble and multi-view classification that exploit over-the-air computation (OAC). We show that these schemes perform better than their orthogonal counterparts with statistically significant differences while using fewer resources and providing privacy guarantees. We also provide experimental results verifying the benefits of the proposed OAC approach to multi-user inference, and perform an ablation study to demonstrate the effectiveness of our design choices. We share the source code of the framework publicly on GitHub to facilitate further research and reproducibility.
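A toy sketch of what the multiple access channel's superposition property buys: all clients transmit their class scores simultaneously, and the receiver obtains their sum in a single channel use without observing any individual model's output (shapes and noise level are assumptions for illustration, not the paper's scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each of 5 clients holds its own class-score vector for the same query,
# produced by its independently trained local model.
client_scores = rng.normal(size=(5, 10))

# Orthogonal access would need one channel use per client; with over-the-air
# computation all clients transmit at once and the multiple access channel
# physically sums the analog waveforms in a single channel use.
noise = 0.01 * rng.standard_normal(10)       # assumed receiver noise level
received = client_scores.sum(axis=0) + noise

# The receiver decides from the aggregate, never seeing any single client's
# scores, which is where the privacy benefit comes from.
decision = int(np.argmax(received))
```

The bandwidth saving is the factor of 5 here; the aggregation also hides individual local-model outputs from the receiver.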
Volume 3, pages 215-231. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10829586
Citations: 0
Conditional Denoising Diffusion Probabilistic Models for Data Reconstruction Enhancement in Wireless Communications
Pub Date : 2024-12-25 DOI: 10.1109/TMLCN.2024.3522872
Mehdi Letafati;Samad Ali;Matti Latva-Aho
In this paper, conditional denoising diffusion probabilistic models (CDiffs) are proposed to enhance the data transmission and reconstruction over wireless channels. The underlying mechanism of diffusion models is to decompose the data generation process over the so-called “denoising” steps. Inspired by this, the key idea is to leverage the generative prior of diffusion models in learning a “noisy-to-clean” transformation of the information signal to help enhance data reconstruction. The proposed scheme could be beneficial for communication scenarios in which prior knowledge of the information content is available, e.g., in multimedia transmission. Hence, instead of employing complicated channel codes that reduce the information rate, one can exploit diffusion priors for reliable data reconstruction, especially under extreme channel conditions due to low signal-to-noise ratio (SNR), or hardware-impaired communications. The proposed CDiff-assisted receiver is tailored for the scenario of wireless image transmission using the MNIST dataset. Our numerical results highlight the reconstruction performance of our scheme compared to conventional digital communication, as well as a deep neural network (DNN)-based benchmark. It is also shown that more than 10 dB improvement in the reconstruction could be achieved in low SNR regimes, without the need to reduce the information rate for error correction.
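The "noisy-to-clean" idea can be made concrete with the standard DDPM algebra: under a known noise schedule, an estimate of the noise immediately yields an estimate of the clean signal. A minimal sketch with an oracle noise predictor standing in for the trained conditional denoiser (the linear schedule and all parameters are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear noise schedule (assumed for illustration; the paper's may differ).
T = 50
betas = np.linspace(1e-4, 0.05, T)
alpha_bar = np.cumprod(1.0 - betas)

x0 = rng.normal(size=8)      # clean information signal
eps = rng.normal(size=8)     # Gaussian noise (diffusion / channel impairment)
t = T - 1

# Forward ("noising") process: the signal after t noising steps.
x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

def denoise(x_t, t, eps_hat):
    """Invert the forward step given a noise estimate eps_hat (in the real
    receiver this comes from the trained conditional denoiser)."""
    return (x_t - np.sqrt(1.0 - alpha_bar[t]) * eps_hat) / np.sqrt(alpha_bar[t])

x0_hat = denoise(x_t, t, eps)   # with an oracle noise estimate: exact recovery
```

The trained model only ever has to predict `eps`; reconstruction quality then degrades gracefully with the noise-prediction error rather than with a hard decoding threshold.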
Volume 3, pages 133-146. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10816175
Citations: 0
Multi-Agent Reinforcement Learning With Action Masking for UAV-Enabled Mobile Communications
Pub Date : 2024-12-23 DOI: 10.1109/TMLCN.2024.3521876
Danish Rizvi;David Boyle
Unmanned Aerial Vehicles (UAVs) are increasingly used as aerial base stations to provide ad hoc communications infrastructure. Building upon prior research efforts which consider either static nodes, 2D trajectories or single UAV systems, this paper focuses on the use of multiple UAVs for providing wireless communication to mobile users in the absence of terrestrial communications infrastructure. In particular, we jointly optimize UAV 3D trajectory and NOMA power allocation to maximize system throughput. Firstly, a weighted K-means-based clustering algorithm establishes UAV-user associations at regular intervals. Then the efficacy of training a novel Shared Deep Q-Network (SDQN) with action masking is explored. Unlike training each UAV separately using DQN, the SDQN reduces training time by using the experiences of multiple UAVs instead of a single agent. We also show that SDQN can be used to train a multi-agent system with differing action spaces. Simulation results confirm that: 1) training a shared DQN outperforms a conventional DQN in terms of maximum system throughput (+20%) and training time (-10%); 2) it can converge for agents with different action spaces, yielding a 9% increase in throughput compared to Mutual DQN algorithm; and 3) combining NOMA with an SDQN architecture enables the network to achieve a better sum rate compared with existing baseline schemes.
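Action masking itself is simple to sketch: invalid actions receive negative-infinite utility before the arg-max, which is also what lets agents with differently sized action spaces share one Q-network (a minimal illustration, not the paper's SDQN):

```python
import numpy as np

def masked_greedy_action(q_values, valid_mask):
    """Greedy action selection restricted to valid actions.

    Invalid actions (e.g., a move that would take a UAV outside the service
    area) get -inf utility, so they can never be chosen; an agent with a
    smaller action space simply masks out the actions it lacks.
    """
    masked = np.where(valid_mask, q_values, -np.inf)
    return int(np.argmax(masked))

q = np.array([2.0, 5.0, 1.0, 3.0])
mask = np.array([True, False, True, True])   # action 1 is invalid here
a = masked_greedy_action(q, mask)            # best *valid* action
```

During training the same mask is typically applied to the target Q-values, so the network never bootstraps from actions an agent cannot take.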
Volume 3, pages 117-132. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10812765
Citations: 0
Online Learning for Intelligent Thermal Management of Interference-Coupled and Passively Cooled Base Stations
Pub Date : 2024-12-16 DOI: 10.1109/TMLCN.2024.3517619
Zhanwei Yu;Yi Zhao;Xiaoli Chu;Di Yuan
Passively cooled base stations (PCBSs) have emerged to deliver better cost and energy efficiency. However, passive cooling necessitates intelligent thermal control via traffic management, i.e., the instantaneous data traffic or throughput of a PCBS directly impacts its thermal performance. This is particularly challenging for outdoor deployment of PCBSs because the heat dissipation efficiency is uncertain and fluctuates over time. What is more, the PCBSs are interference-coupled in multi-cell scenarios. Thus, a higher-throughput PCBS leads to higher interference to the other PCBSs, which, in turn, would require more resource consumption to meet their respective throughput targets. In this paper, we address online decision-making for maximizing the total downlink throughput for a multi-PCBS system subject to constraints on operating temperature. We demonstrate that a reinforcement learning (RL) approach, specifically soft actor-critic (SAC), can successfully perform throughput maximization while keeping the PCBSs cool, by adapting the throughput to time-varying heat dissipation conditions. Furthermore, we design a denial and reward mechanism that effectively mitigates the risk of overheating during the exploration phase of RL. Simulation results show that our approach achieves up to 88.6% of the global optimum. This is very promising, as our approach operates without prior knowledge of future heat dissipation efficiency, which is required by the global optimum.
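The denial-and-reward idea can be sketched as a shaped reward that pays out throughput only while the operating temperature stays below its limit (the exact functional form, threshold, and penalty here are assumptions, not the paper's design):

```python
def shaped_reward(throughput, temperature, t_max=75.0, penalty=10.0):
    """Throughput reward with an overheating denial term (assumed form).

    Below the temperature limit the agent earns the throughput it served;
    above it the action is treated as denied and penalized, which is the
    kind of mechanism that keeps RL exploration from overheating the PCBS.
    """
    if temperature > t_max:
        return -penalty      # denial: overheating risk dominates the reward
    return throughput

safe = shaped_reward(5.0, temperature=60.0)
hot = shaped_reward(5.0, temperature=80.0)
```

A hard denial like this keeps unsafe actions out of the effective policy even while the stochastic SAC policy is still exploring.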
Volume 3, pages 64-79. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10802970
Citations: 0
Robust and Lightweight Modeling of IoT Network Behaviors From Raw Traffic Packets
Pub Date : 2024-12-16 DOI: 10.1109/TMLCN.2024.3517613
Aleksandar Pasquini;Rajesh Vasa;Irini Logothetis;Hassan Habibi Gharakheili;Alexander Chambers;Minh Tran
Machine Learning (ML)-based techniques are increasingly used for network management tasks, such as intrusion detection, application identification, or asset management. Recent studies show that neural network-based traffic analysis can achieve performance comparable to human feature-engineered ML pipelines. However, neural networks provide this performance at a higher computational cost and complexity, due to high-throughput traffic conditions necessitating specialized hardware for real-time operations. This paper presents lightweight models for encoding characteristics of Internet-of-Things (IoT) network packets; 1) we present two strategies to encode packets (regardless of their size, encryption, and protocol) to integer vectors: a shallow lightweight neural network and compression. With a public dataset containing about 8 million packets emitted by 22 IoT device types, we show the encoded packets can form complete (up to 80%) and homogeneous (up to 89%) clusters; 2) we demonstrate the efficacy of our generated encodings in the downstream classification task and quantify their computing costs. We train three multi-class models to predict the IoT class given network packets and show our models can achieve the same levels of accuracy (94%) as deep neural network embeddings but with computing costs up to 10 times lower; 3) we examine how the amount of packet data (headers and payload) can affect the prediction quality. We demonstrate how the choice of Internet Protocol (IP) payloads strikes a balance between prediction accuracy (99%) and cost. Along with the cost-efficacy of models, this capability can result in rapid and accurate predictions, meeting the requirements of network operators.
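The compression strategy for protocol-agnostic packet encoding can be sketched as follows (the vector length, zero-padding scheme, and function names are illustrative assumptions, not the paper's exact pipeline):

```python
import zlib

def encode_packet(raw_bytes, dim=64):
    """Encode a raw packet into a fixed-length integer vector.

    Sketch of the compression idea: compress the packet as-is (no parsing,
    so it works regardless of size, protocol, or encryption), then truncate
    or zero-pad the compressed bytes to a fixed dimension.
    """
    compressed = zlib.compress(raw_bytes)
    vec = list(compressed[:dim])      # bytes -> integers in [0, 255]
    vec += [0] * (dim - len(vec))     # zero-pad short packets
    return vec

v = encode_packet(b"E\x00\x00T" + b"\x00" * 80)  # toy header-like packet
```

The resulting fixed-length integer vectors can feed a downstream classifier directly, avoiding both hand-crafted features and heavyweight embedding networks.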
Volume 3, pages 98-116. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10802939
Citations: 0
Self-Supervised Contrastive Learning for Joint Active and Passive Beamforming in RIS-Assisted MU-MIMO Systems
Pub Date : 2024-12-11 DOI: 10.1109/TMLCN.2024.3515913
Zhizhou He;Fabien Héliot;Yi Ma
Reconfigurable Intelligent Surfaces (RIS) can enhance system performance at the cost of increased complexity in multi-user MIMO systems. The beamforming options scale with the number of antennas at the base station/RIS. Existing methods for solving this problem tend to use computationally intensive iterative methods that are non-scalable for large RIS-aided MIMO systems. We propose here a novel self-supervised contrastive learning neural network (NN) architecture to optimize the sum spectral efficiency through joint active and passive beamforming design in multi-user RIS-aided MIMO systems. Our scheme utilizes contrastive learning to capture the channel features from augmented channel data and then can be trained to perform beamforming with only 1% of labeled data. The labels are derived through a closed-form optimization algorithm, leveraging a sequential fractional programming approach. Leveraging the proposed self-supervised design helps to greatly reduce the computational complexity during the training phase. Moreover, our proposed model can operate under various noise levels by using data augmentation methods while maintaining a robust out-of-distribution performance under various propagation environments and different signal-to-noise ratios (SNRs). During training, our proposed network only needs 10% of labeled data to converge when compared to supervised learning. Our trained NN can then achieve performance that is only about 7% and 2.5% away from the mathematical upper bound and fully supervised learning, respectively, with far less computational complexity.
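The contrastive objective behind such schemes can be sketched with an InfoNCE/NT-Xent-style loss over paired channel augmentations (a generic sketch, not the paper's exact loss; batch size, temperature, and embedding dimension are assumptions):

```python
import numpy as np

def info_nce_loss(z1, z2, tau=0.1):
    """InfoNCE/NT-Xent-style loss: z1[i] and z2[i] embed two augmented views
    (e.g., two noise realizations) of the same channel; the other rows in
    the batch act as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                            # pairwise similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))           # pull matched pairs together

rng = np.random.default_rng(0)
z = rng.normal(size=(32, 16))
loss_aligned = info_nce_loss(z, z)                           # identical views
loss_shuffled = info_nce_loss(z, rng.normal(size=(32, 16)))  # unrelated views
```

Pretraining the encoder with a loss like this is what allows the beamforming head to reach good performance with only a small fraction of labeled channel realizations.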
Volume 3, pages 147-162. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10793234
Citations: 0
IEEE Communications Society Board of Governors
Pub Date : 2024-12-11 DOI: 10.1109/TMLCN.2024.3500756
{"title":"IEEE Communications Society Board of Governors","authors":"","doi":"10.1109/TMLCN.2024.3500756","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3500756","url":null,"abstract":"","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"C3-C3"},"PeriodicalIF":0.0,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10792973","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Receiver Architectures for Robust MIMO Rate Splitting Multiple Access
Pub Date : 2024-12-09 DOI: 10.1109/TMLCN.2024.3513267
Dheeraj Raja Kumar;Carles Antón-Haro;Xavier Mestre
Machine learning tools are becoming powerful alternatives for improving the robustness of wireless communication systems. Signal processing procedures that tend to collapse in the presence of model mismatches can be made markedly more robust by selectively incorporating data-driven techniques. This paper explores the use of neural network (NN)-based receivers to improve reception in a Rate Splitting Multiple Access (RSMA) system. The intention is to explore several alternatives to conventional successive interference cancellation (SIC) techniques, which are known to be ineffective in the presence of channel state information (CSI) and model errors. The focus is on NN-based architectures that do not need to be retrained at each channel realization. The main idea is to replace some of the basic operations in a conventional multi-antenna SIC receiver with their NN-based equivalents, following a hybrid model/data-driven approach that preserves the main procedures of the model-based signal demodulation chain. Three different architectures are explored along with their performance and computational complexity, characterized under different degrees of model uncertainty, including imperfect channel state information and non-linear channels. We evaluate the performance of the data-driven architectures in an overloaded scenario to analyze their effectiveness against conventional benchmarks. The study shows that a higher degree of transceiver robustness can be achieved, provided the neural architecture is well designed and fed with the right information.
{"title":"Deep Receiver Architectures for Robust MIMO Rate Splitting Multiple Access","authors":"Dheeraj Raja Kumar;Carles Antón-Haro;Xavier Mestre","doi":"10.1109/TMLCN.2024.3513267","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3513267","url":null,"abstract":"Machine Learning tools are becoming very powerful alternatives to improve the robustness of wireless communication systems. Signal processing procedures that tend to collapse in the presence of model mismatches can be effectively improved and made robust by incorporating the selective use of data-driven techniques. This paper explores the use of neural network (NN)-based receivers to improve the reception of a Rate Splitting Multiple Access (RSMA) system. The intention is to explore several alternatives to conventional successive interference cancellation (SIC) techniques, which are known to be ineffective in the presence of channel state information (CSI) and model errors. The focus is on NN-based architectures that do not need to be retrained at each channel realization. The main idea is to replace some of the basic operations in a conventional multi-antenna SIC receiver by their NN-based equivalents, following a hybrid Model/Data-driven based approach that preserves the main procedures in the model-based signal demodulation chain. Three different architectures are explored along with their performance and computational complexity, characterized under different degrees of model uncertainty, including imperfect channel state information and non-linear channels. We evaluate the performance of data-driven architectures in overloaded scenario to analyze its effectiveness against conventional benchmarks. The study dictates that a higher degree of robustness of transceiver can be achieved, provided the neural architecture is well-designed and fed with the right information.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"45-63"},"PeriodicalIF":0.0,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10781451","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142844397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
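The hybrid model/data-driven receiver described above keeps the conventional SIC chain but swaps individual stages for learned modules. A minimal sketch of that structure follows, with a linear MMSE stage kept model-based and a pluggable demapper slot where a trained NN module could sit; the `demap` interface and all parameter values are illustrative assumptions, not the paper's exact architectures.

```python
import numpy as np

rng = np.random.default_rng(1)

def mmse_equalizer(y, H, noise_var):
    # Model-based stage: linear MMSE estimate of the remaining streams.
    A = H.conj().T @ H + noise_var * np.eye(H.shape[1])
    return np.linalg.solve(A, H.conj().T @ y)

def sic_receiver(y, H, noise_var, demap):
    # Conventional multi-antenna SIC chain with a pluggable demapper stage.
    # `demap` is the slot where a trained NN module could replace the
    # hard-decision rule (a hypothetical interface for illustration).
    y_res, est = y.copy(), []
    for k in range(H.shape[1]):
        x_hat = mmse_equalizer(y_res, H[:, k:], noise_var)[0]
        s = demap(x_hat)                 # model-based or NN-based decision
        est.append(s)
        y_res = y_res - H[:, k] * s      # cancel the decoded stream
    return np.array(est)

hard_bpsk = lambda x: np.sign(np.real(x))     # baseline model-based demapper

n_rx, n_streams, noise_var = 8, 3, 1e-4
H = rng.standard_normal((n_rx, n_streams))    # per-stream channel columns
x = rng.choice([-1.0, 1.0], size=n_streams)   # BPSK symbols, one per stream
y = H @ x + np.sqrt(noise_var) * rng.standard_normal(n_rx)
est = sic_receiver(y, H, noise_var, hard_bpsk)
print(est, x)
```

Because the NN only replaces a stage inside an otherwise model-based chain, the surrounding demodulation procedure is preserved, which is the property that lets such receivers operate without retraining at each channel realization.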
Toward Understanding Federated Learning over Unreliable Networks
Pub Date : 2024-12-04 DOI: 10.1109/TMLCN.2024.3511475
Chenyuan Feng;Ahmed Arafa;Zihan Chen;Mingxiong Zhao;Tony Q. S. Quek;Howard H. Yang
This paper studies the efficiency of training a statistical model among an edge server and multiple clients via Federated Learning (FL) – a machine learning method that preserves data privacy during training – over wireless networks. Due to unreliable wireless channels and constrained communication resources, the server can only choose a handful of clients for parameter updates during each communication round. To address this issue, analytical expressions are derived to characterize the FL convergence rate, accounting for key features from both the communication and algorithmic aspects, including transmission reliability, scheduling policies, and the momentum method. First, the analysis reveals that either carefully designed user scheduling policies or additional bandwidth to accommodate more clients in each communication round can expedite model training in networks with reliable connections. However, these methods become ineffective when the connection is erratic. Second, it is verified that incorporating the momentum method into the model training algorithm accelerates convergence and provides greater resilience against transmission failures. Last, extensive empirical simulations are provided to verify these theoretical findings and performance enhancements.
{"title":"Toward Understanding Federated Learning over Unreliable Networks","authors":"Chenyuan Feng;Ahmed Arafa;Zihan Chen;Mingxiong Zhao;Tony Q. S. Quek;Howard H. Yang","doi":"10.1109/TMLCN.2024.3511475","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3511475","url":null,"abstract":"This paper studies the efficiency of training a statistical model among an edge server and multiple clients via Federated Learning (FL) – a machine learning method that preserves data privacy in the training process – over wireless networks. Due to unreliable wireless channels and constrained communication resources, the server can only choose a handful of clients for parameter updates during each communication round. To address this issue, analytical expressions are derived to characterize the FL convergence rate, accounting for key features from both communication and algorithmic aspects, including transmission reliability, scheduling policies, and momentum method. First, the analysis reveals that either delicately designed user scheduling policies or expanding higher bandwidth to accommodate more clients in each communication round can expedite model training in networks with reliable connections. However, these methods become ineffective when the connection is erratic. Second, it has been verified that incorporating the momentum method into the model training algorithm accelerates the rate of convergence and provides greater resilience against transmission failures. Last, extensive empirical simulations are provided to verify these theoretical discoveries and enhancements in performance.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"80-97"},"PeriodicalIF":0.0,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10777576","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142880294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
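The round structure analyzed above, with partial client scheduling, random uplink failures, and a momentum step, can be sketched as follows. The Bernoulli success probability, the server-side placement of the momentum buffer, and the toy quadratic client objectives are all assumptions for illustration; the paper's analytical setting may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

def fl_round(w, clients, lr, m_buf, beta=0.9, p_success=0.7, k_sched=5):
    # One FL round: schedule k_sched clients, drop updates whose uplink
    # fails (Bernoulli channel model), average the surviving gradients,
    # and apply a momentum step at the server.
    sched = rng.choice(len(clients), size=k_sched, replace=False)
    grads = [clients[i](w) for i in sched if rng.random() < p_success]
    if not grads:                        # every transmission failed this round
        return w, m_buf
    m_buf = beta * m_buf + np.mean(grads, axis=0)
    return w - lr * m_buf, m_buf

# Toy quadratic losses f_i(w) = ||w - t_i||^2, so client i's gradient is 2(w - t_i).
targets = rng.standard_normal((10, 4))
clients = [lambda w, t=t: 2.0 * (w - t) for t in targets]

w, m_buf = np.zeros(4), np.zeros(4)
for _ in range(200):
    w, m_buf = fl_round(w, clients, lr=0.05, m_buf=m_buf)
print(np.round(w, 2))
```

With uniform scheduling the averaged gradient is an unbiased estimate of the global one, so the iterate drifts toward the mean of the client targets; lowering `p_success` increases the variance of each round, which is the regime where the abstract reports momentum helping resilience.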