
Latest publications from IEEE Transactions on Machine Learning in Communications and Networking

Deep Fusion Intelligence: Enhancing 5G Security Against Over-the-Air Attacks
Pub Date : 2025-01-23 DOI: 10.1109/TMLCN.2025.3533427
Mohammadreza Amini;Ghazal Asemian;Burak Kantarci;Cliff Ellement;Melike Erol-Kantarci
With the increasing deployment of 5G networks, the vulnerability to malicious interference, such as jamming attacks, has become a significant concern. Detecting such attacks is crucial to ensuring the reliability and security of 5G communication systems, specifically in connected and autonomous vehicles (CAVs). This paper proposes a robust jamming detection system that addresses challenges posed by impairments such as Carrier Frequency Offset (CFO) and channel effects. To improve overall detection performance, the proposed approach leverages deep ensemble learning techniques by fusing features with different sensitivities from the RF domain and the physical layer, namely: Primary Synchronization Signal (PSS) and Secondary Synchronization Signal (SSS) cross-correlations in the time and frequency domains, the energy of the null subcarriers, and the PBCH Error Vector Magnitude (EVM). The ensemble module is optimized over the aggregation method and different learning parameters. Furthermore, to mitigate false positives and false negatives, a systematic approach termed the Temporal Epistemic Decision Aggregator (TEDA) is introduced, which navigates the time-accuracy trade-off by integrating decisions over time, thereby enhancing decision reliability. The presented approach is also capable of detecting inter-cell/inter-sector interference, thereby enhancing situational awareness of the 5G air interface and RF-domain security. Results show that the presented approach achieves an Area Under the Curve (AUC) of 0.98, outperforming the other compared methods by at least 0.06 (a 6% improvement). The true positive and true negative rates are reported as 93.5% and 91.9%, respectively, showcasing strong performance in scenarios with CFO and channel impairments and outperforming the other compared methods by at least 12%. An optimization problem is formulated and solved based on the level of uncertainty observed in the experimental set-up, and the optimum TEDA configuration is derived for the target false-alarm and miss-detection probabilities. Ultimately, the performance of the entire architecture is confirmed through analysis of real 5G signals acquired from a practical testbed, showing strong agreement with the simulation results.
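The abstract does not detail TEDA's internals; as a minimal illustration of temporal decision aggregation, per-snapshot detector outputs can be majority-voted over a sliding window (the window policy and function names below are assumptions, not the paper's design). A longer window suppresses isolated false alarms and misses at the cost of decision latency, which is the time-accuracy trade-off TEDA is said to navigate.

```python
from collections import deque

def temporal_aggregate(decisions, window=5):
    """Majority-vote each per-snapshot binary decision over a sliding window.

    `decisions` is an iterable of detector outputs (1 = jamming detected).
    Isolated flips are suppressed, at the cost of reacting more slowly.
    """
    buf = deque(maxlen=window)
    out = []
    for d in decisions:
        buf.append(d)
        # Majority vote over the decisions currently in the window.
        out.append(1 if sum(buf) * 2 > len(buf) else 0)
    return out
```

For example, a single spurious detection in `[0, 0, 1, 0, 0, ...]` is voted away with `window=3`, while a sustained run of 1s is still reported.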
Citations: 0
Semantic Importance-Aware Communications With Semantic Correction Using Large Language Models
Pub Date : 2025-01-16 DOI: 10.1109/TMLCN.2025.3530875
Shuaishuai Guo;Yanhu Wang;Jia Ye;Anbang Zhang;Peng Zhang;Kun Xu
Semantic communications, a promising approach for agent-human and agent-agent interactions, typically operate at a feature level, lacking true semantic understanding. This paper explores understanding-level semantic communications (ULSC), transforming visual data into human-intelligible semantic content. We employ an image caption neural network (ICNN) to derive semantic representations from visual data, expressed as natural language descriptions. These are further refined using a pre-trained large language model (LLM) for importance quantification and semantic error correction. The subsequent semantic importance-aware communications (SIAC) aim to minimize semantic loss while respecting transmission delay constraints, exemplified through adaptive modulation and coding strategies. At the receiving end, LLM-based semantic error correction is utilized. If visual data recreation is desired, a pre-trained generative artificial intelligence (AI) model can regenerate it using the corrected descriptions. We assess semantic similarities between transmitted and recovered content, demonstrating ULSC’s superior ability to convey semantic understanding compared to feature-level semantic communications (FLSC). ULSC’s conversion of visual data to natural language facilitates various cognitive tasks, leveraging human knowledge bases. Additionally, this method enhances privacy, as neither original data nor features are directly transmitted.
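As a sketch of the importance-aware transmission idea under a delay constraint (the abstract mentions adaptive modulation and coding; the greedy policy, per-token costs, and names below are illustrative assumptions): more robust, slower modulation/coding is assigned to the most important tokens first, until the delay budget is exhausted.

```python
def assign_mcs(tokens, importance, delay_budget):
    """Greedy importance-aware MCS assignment (illustrative only).

    Each token gets either a robust-but-slow or fast-but-fragile MCS.
    Important tokens are protected first while the total transmission
    time stays within `delay_budget` (all costs in abstract time units).
    """
    ROBUST_COST, FAST_COST = 2.0, 1.0   # assumed per-token costs
    plan = {t: "fast" for t in tokens}
    budget = delay_budget - FAST_COST * len(tokens)
    # Upgrade tokens in decreasing order of semantic importance.
    for tok in sorted(tokens, key=lambda t: -importance[t]):
        extra = ROBUST_COST - FAST_COST
        if budget >= extra:
            plan[tok] = "robust"
            budget -= extra
        else:
            break
    return plan
```

With a budget of 4.0 time units and three tokens, only the single most important token can be upgraded to the robust MCS.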
Citations: 0
Convergence-Privacy-Fairness Trade-Off in Personalized Federated Learning
Pub Date : 2025-01-13 DOI: 10.1109/TMLCN.2025.3528901
Xiyu Zhao;Qimei Cui;Weicai Li;Wei Ni;Ekram Hossain;Quan Z. Sheng;Xiaofeng Tao;Ping Zhang
Personalized federated learning (PFL), e.g., the renowned Ditto, strikes a balance between personalization and generalization by conducting federated learning (FL) to guide personalized learning (PL). While FL is unaffected by personalized model training, in Ditto, PL depends on the outcome of the FL. However, the clients’ concern about their privacy, and the consequent perturbation of their local models, can affect the convergence and (performance) fairness of PL. This paper presents a PFL framework, called DP-Ditto, which is a non-trivial extension of Ditto under the protection of differential privacy (DP), and analyzes the trade-off among its privacy guarantee, model convergence, and performance distribution fairness. We also analyze the convergence upper bound of the personalized models under DP-Ditto and derive the optimal number of global aggregations given a privacy budget. Further, we analyze the performance fairness of the personalized models, and reveal the feasibility of optimizing DP-Ditto jointly for convergence and fairness. Experiments validate our analysis and demonstrate that DP-Ditto can surpass the DP-perturbed versions of state-of-the-art PFL models, such as FedAMP, pFedMe, APPLE, and FedALA, by over 32.71% in fairness and 9.66% in accuracy.
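For context, Ditto's personalized objective regularizes each client's personal model toward the global one, $\min_v f_k(v) + \frac{\lambda}{2}\|v - w\|^2$; a sketch of one personalized gradient step, plus the standard Gaussian-mechanism perturbation used in DP training (clip, then add noise), is below. The step sizes and the DP calibration here are illustrative, not DP-Ditto's exact parameters.

```python
import numpy as np

def ditto_personal_step(v, w_global, grad_fk, lam, lr):
    """One gradient step on Ditto's personalized objective
        f_k(v) + (lam / 2) * ||v - w_global||^2,
    which pulls the personal model v toward the global model w_global.
    `grad_fk` is the gradient of the client's local loss at v.
    """
    grad = grad_fk + lam * (v - w_global)
    return v - lr * grad

def dp_perturb(update, clip, sigma, rng):
    """Gaussian-mechanism DP perturbation of a model update (sketch):
    clip the update to L2 norm `clip`, then add N(0, (sigma*clip)^2) noise."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip, size=update.shape)
```

A larger `lam` ties the personal model more tightly to the (DP-noised) global model, which is exactly where the convergence-privacy-fairness tension analyzed in the paper arises.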
Citations: 0
Asynchronous Real-Time Federated Learning for Anomaly Detection in Microservice Cloud Applications
Pub Date : 2025-01-09 DOI: 10.1109/TMLCN.2025.3527919
Mahsa Raeiszadeh;Amin Ebrahimzadeh;Roch H. Glitho;Johan Eker;Raquel A. F. Mini
The complexity and dynamicity of microservice architectures in cloud environments present substantial challenges to the reliability and availability of the services built on these architectures. Therefore, effective anomaly detection is crucial to prevent impending failures and resolve them promptly. Distributed data analysis techniques based on machine learning (ML) have recently gained attention in detecting anomalies in microservice systems. ML-based anomaly detection techniques mostly require centralized data collection and processing, which may raise scalability and computational issues in practice. In this paper, we propose an Asynchronous Real-Time Federated Learning (ART-FL) approach for anomaly detection in cloud-based microservice systems. In our approach, edge clients perform real-time learning with continuous streaming local data. At the edge clients, we model intra-service behaviors and inter-service dependencies in multi-source distributed data based on a Span Causal Graph (SCG) representation and train a model through a combination of Graph Neural Network (GNN) and Positive and Unlabeled (PU) learning. Our FL approach updates the global model in an asynchronous manner to achieve accurate and efficient anomaly detection, addressing computational overhead across diverse edge clients, including those that experience delays. Our trace-driven evaluations indicate that the proposed method outperforms the state-of-the-art anomaly detection methods by 4% in terms of $F_{1}$-score while meeting the given time efficiency and scalability requirements.
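ART-FL's exact aggregation rule is not given in the abstract; a common way asynchronous FL handles delayed clients is staleness-discounted mixing, where an update computed against an old global model is blended in with a smaller weight. The sketch below uses that standard rule with an assumed 1/(1+staleness) discount.

```python
import numpy as np

def async_update(global_model, client_model, staleness, base_mix=0.5):
    """Staleness-discounted asynchronous aggregation (illustrative).

    A client update computed against an old global model (high
    `staleness`, counted in global versions) is mixed in with a
    smaller weight, so delayed edge clients cannot drag the global
    model backwards while fresh updates still move it quickly.
    """
    alpha = base_mix / (1.0 + staleness)
    return (1.0 - alpha) * global_model + alpha * client_model
```

A fresh update (staleness 0) moves the global model halfway toward the client, while a one-version-stale update only moves it a quarter of the way.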
Citations: 0
Private Collaborative Edge Inference via Over-the-Air Computation
Pub Date : 2025-01-06 DOI: 10.1109/TMLCN.2025.3526551
Selim F. Yilmaz;Burak Hasircioğlu;Li Qiao;Deniz Gündüz
We consider collaborative inference at the wireless edge, where each client’s model is trained independently on its local dataset. Clients are queried in parallel to make an accurate decision collaboratively. In addition to maximizing the inference accuracy, we also want to ensure the privacy of local models. To this end, we leverage the superposition property of the multiple access channel to implement bandwidth-efficient multi-user inference methods. We propose different methods for ensemble and multi-view classification that exploit over-the-air computation (OAC). We show that these schemes perform better than their orthogonal counterparts with statistically significant differences while using fewer resources and providing privacy guarantees. We also provide experimental results verifying the benefits of the proposed OAC approach to multi-user inference, and perform an ablation study to demonstrate the effectiveness of our design choices. We share the source code of the framework publicly on Github to facilitate further research and reproducibility.
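The superposition property the abstract leverages means that when all clients transmit analog signals in the same resource block, the multiple-access channel physically adds them, so the receiver observes the ensemble sum "for free" in one channel use. A minimal sketch of such over-the-air ensembling (the AWGN model and decision rule here are illustrative, not the paper's exact scheme):

```python
import numpy as np

def oac_ensemble(client_logits, noise_std, rng):
    """Over-the-air ensemble classification (sketch).

    All clients transmit their analog logit vectors simultaneously;
    the multiple-access channel superimposes them, so the receiver
    sees their sum plus noise and decides from the noisy ensemble.
    Individual client outputs are never observable separately, which
    is the source of the privacy benefit.
    """
    superimposed = np.sum(client_logits, axis=0)   # the channel does the sum
    received = superimposed + rng.normal(0.0, noise_std, superimposed.shape)
    return int(np.argmax(received))                # collaborative decision
```

One orthogonal transmission per client would need as many channel uses as clients; here one use suffices regardless of the number of clients.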
Citations: 0
Conditional Denoising Diffusion Probabilistic Models for Data Reconstruction Enhancement in Wireless Communications
Pub Date : 2024-12-25 DOI: 10.1109/TMLCN.2024.3522872
Mehdi Letafati;Samad Ali;Matti Latva-Aho
In this paper, conditional denoising diffusion probabilistic models (CDiffs) are proposed to enhance data transmission and reconstruction over wireless channels. The underlying mechanism of diffusion models is to decompose the data generation process over the so-called “denoising” steps. Inspired by this, the key idea is to leverage the generative prior of diffusion models in learning a “noisy-to-clean” transformation of the information signal to help enhance data reconstruction. The proposed scheme could be beneficial for communication scenarios in which prior knowledge of the information content is available, e.g., in multimedia transmission. Hence, instead of employing complicated channel codes that reduce the information rate, one can exploit diffusion priors for reliable data reconstruction, especially under extreme channel conditions due to low signal-to-noise ratio (SNR), or hardware-impaired communications. The proposed CDiff-assisted receiver is tailored for the scenario of wireless image transmission using the MNIST dataset. Our numerical results highlight the reconstruction performance of our scheme compared to conventional digital communication, as well as a deep neural network (DNN)-based benchmark. It is also shown that more than 10 dB improvement in the reconstruction could be achieved in low SNR regimes, without the need to reduce the information rate for error correction.
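For reference, the standard DDPM forward process underlying any such model is $x_t = \sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\varepsilon$, and the denoiser's noise prediction can be inverted to an estimate of the clean signal $x_0$. The sketch below checks that identity with an oracle noise prediction; in a CDiff the prediction would come from a conditional network, which is not reproduced here.

```python
import numpy as np

def forward_noise(x0, alpha_bar_t, eps):
    """DDPM forward step: x_t = sqrt(a_bar_t) x0 + sqrt(1 - a_bar_t) eps."""
    return np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps

def estimate_x0(xt, alpha_bar_t, eps_pred):
    """Invert the forward step given a noise prediction.

    In CDiffs the prediction eps_pred comes from a conditional
    denoising network; passing the true noise recovers x0 exactly,
    which verifies the algebra of the forward/reverse relation.
    """
    return (xt - np.sqrt(1.0 - alpha_bar_t) * eps_pred) / np.sqrt(alpha_bar_t)
```

The "noisy-to-clean" transformation mentioned above is precisely this inversion, applied step by step with learned noise predictions instead of the oracle.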
Citations: 0
Multi-Agent Reinforcement Learning With Action Masking for UAV-Enabled Mobile Communications
Pub Date : 2024-12-23 DOI: 10.1109/TMLCN.2024.3521876
Danish Rizvi;David Boyle
Unmanned Aerial Vehicles (UAVs) are increasingly used as aerial base stations to provide ad hoc communications infrastructure. Building upon prior research efforts which consider either static nodes, 2D trajectories or single UAV systems, this paper focuses on the use of multiple UAVs for providing wireless communication to mobile users in the absence of terrestrial communications infrastructure. In particular, we jointly optimize UAV 3D trajectory and NOMA power allocation to maximize system throughput. Firstly, a weighted K-means-based clustering algorithm establishes UAV-user associations at regular intervals. Then the efficacy of training a novel Shared Deep Q-Network (SDQN) with action masking is explored. Unlike training each UAV separately using DQN, the SDQN reduces training time by using the experiences of multiple UAVs instead of a single agent. We also show that SDQN can be used to train a multi-agent system with differing action spaces. Simulation results confirm that: 1) training a shared DQN outperforms a conventional DQN in terms of maximum system throughput (+20%) and training time (-10%); 2) it can converge for agents with different action spaces, yielding a 9% increase in throughput compared to Mutual DQN algorithm; and 3) combining NOMA with an SDQN architecture enables the network to achieve a better sum rate compared with existing baseline schemes.
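Action masking, as commonly implemented in deep RL, sets the Q-values of invalid actions to negative infinity before greedy selection, which is what lets a single shared network serve agents whose action spaces differ: each agent supplies its own validity mask over the union action space. A minimal sketch (the mask source and network are outside its scope):

```python
import numpy as np

def masked_greedy_action(q_values, valid_mask):
    """Greedy action selection with action masking.

    Invalid actions' Q-values are replaced by -inf so argmax can
    never pick them. With a shared Q-network over the union of all
    agents' action spaces, each agent just passes its own mask.
    """
    masked = np.where(valid_mask, q_values, -np.inf)
    return int(np.argmax(masked))
```

Even if an invalid action has the highest raw Q-value, the mask guarantees a valid one is selected.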
引用次数: 0
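The action-masking idea in the SDQN entry above has a simple core: each agent's invalid actions have their Q-values forced to negative infinity before the greedy choice, which lets agents with differing action spaces share one Q-head. A minimal NumPy sketch — the 6-action shared head and the per-UAV masks are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def masked_greedy_action(q_values: np.ndarray, valid_mask: np.ndarray) -> int:
    """Pick the greedy action while ignoring invalid actions.

    A shared Q-network outputs values over the union of all agents'
    action spaces; each agent supplies a boolean mask selecting the
    actions that exist for it. Invalid entries are set to -inf so
    argmax can never choose them.
    """
    masked_q = np.where(valid_mask, q_values, -np.inf)
    return int(np.argmax(masked_q))

# Example: two UAV agents sharing one 6-action Q-head.
q = np.array([0.2, 1.5, -0.3, 0.9, 2.1, 0.4])
uav_a_mask = np.array([True, True, True, True, False, False])  # 4-action agent
uav_b_mask = np.array([True, True, True, True, True, True])    # 6-action agent

print(masked_greedy_action(q, uav_a_mask))  # best among first four -> 1
print(masked_greedy_action(q, uav_b_mask))  # global best -> 4
```

Because the mask is applied only at action selection, the same network weights (and the same replay experiences) serve both agents, which is what lets the shared network train faster than one DQN per UAV.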
Online Learning for Intelligent Thermal Management of Interference-Coupled and Passively Cooled Base Stations
Pub Date : 2024-12-16 DOI: 10.1109/TMLCN.2024.3517619
Zhanwei Yu;Yi Zhao;Xiaoli Chu;Di Yuan
Passively cooled base stations (PCBSs) have emerged to deliver better cost and energy efficiency. However, passive cooling necessitates intelligent thermal control via traffic management, i.e., the instantaneous data traffic or throughput of a PCBS directly impacts its thermal performance. This is particularly challenging for outdoor deployment of PCBSs because the heat dissipation efficiency is uncertain and fluctuates over time. What is more, the PCBSs are interference-coupled in multi-cell scenarios. Thus, a higher-throughput PCBS causes higher interference to the other PCBSs, which, in turn, require more resource consumption to meet their respective throughput targets. In this paper, we address online decision-making for maximizing the total downlink throughput of a multi-PCBS system subject to constraints on operating temperature. We demonstrate that a reinforcement learning (RL) approach, specifically soft actor-critic (SAC), can successfully perform throughput maximization while keeping the PCBSs cool, by adapting the throughput to time-varying heat dissipation conditions. Furthermore, we design a denial and reward mechanism that effectively mitigates the risk of overheating during the exploration phase of RL. Simulation results show that our approach achieves up to 88.6% of the global optimum. This is very promising, as our approach operates without prior knowledge of future heat dissipation efficiency, which the global optimum requires.
{"title":"Online Learning for Intelligent Thermal Management of Interference-Coupled and Passively Cooled Base Stations","authors":"Zhanwei Yu;Yi Zhao;Xiaoli Chu;Di Yuan","doi":"10.1109/TMLCN.2024.3517619","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3517619","url":null,"abstract":"Passively cooled base stations (PCBSs) have emerged to deliver better cost and energy efficiency. However, passive cooling necessitates intelligent thermal control via traffic management, i.e., the instantaneous data traffic or throughput of a PCBS directly impacts its thermal performance. This is particularly challenging for outdoor deployment of PCBSs because the heat dissipation efficiency is uncertain and fluctuates over time. What is more, the PCBSs are interference-coupled in multi-cell scenarios. Thus, a higher-throughput PCBS leads to higher interference to the other PCBSs, which, in turn, would require more resource consumption to meet their respective throughput targets. In this paper, we address online decision-making for maximizing the total downlink throughput for a multi-PCBS system subject to constraints related on operating temperature. We demonstrate that a reinforcement learning (RL) approach, specifically soft actor-critic (SAC), can successfully perform throughput maximization while keeping the PCBSs cool, by adapting the throughput to time-varying heat dissipation conditions. Furthermore, we design a denial and reward mechanism that effectively mitigates the risk of overheating during the exploration phase of RL. Simulation results show that our approach achieves up to 88.6% of the global optimum. 
This is very promising, as our approach operates without prior knowledge of future heat dissipation efficiency, which is required by the global optimum.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"64-79"},"PeriodicalIF":0.0,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10802970","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142858862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
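The abstract describes its "denial and reward mechanism" for avoiding overheating during exploration only at a high level. One plausible reading is sketched below: an action whose predicted temperature would exceed the operating limit is denied (throughput forced to zero for that step) and penalized. The temperature limit, penalty value, and reward shape are invented for illustration; none of these numbers come from the paper.

```python
def safe_step(requested_throughput: float,
              predicted_temp: float,
              temp_limit: float = 85.0,
              penalty: float = -1.0) -> tuple:
    """Deny throughput actions that would overheat the PCBS.

    If the thermal model predicts the requested action would push the
    operating temperature past the limit, the action is denied
    (throughput forced to zero for this step) and a fixed penalty
    reward is returned; otherwise the reward is the throughput served.
    Returns (served_throughput, reward). All constants here are
    illustrative assumptions, not values from the paper.
    """
    if predicted_temp > temp_limit:
        return 0.0, penalty          # denied: cool down, penalize the agent
    return requested_throughput, requested_throughput

print(safe_step(120.0, 90.0))  # (0.0, -1.0): denied, overheating predicted
print(safe_step(120.0, 70.0))  # (120.0, 120.0): allowed
```

Shaping the reward this way keeps the SAC agent's exploration from ever driving the simulated base station past its thermal limit, while still letting it learn how close to the limit it can safely operate.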
Robust and Lightweight Modeling of IoT Network Behaviors From Raw Traffic Packets
Pub Date : 2024-12-16 DOI: 10.1109/TMLCN.2024.3517613
Aleksandar Pasquini;Rajesh Vasa;Irini Logothetis;Hassan Habibi Gharakheili;Alexander Chambers;Minh Tran
Machine Learning (ML)-based techniques are increasingly used for network management tasks such as intrusion detection, application identification, and asset management. Recent studies show that neural network-based traffic analysis can achieve performance comparable to human feature-engineered ML pipelines. However, neural networks deliver this performance at higher computational cost and complexity, since high-throughput traffic conditions necessitate specialized hardware for real-time operation. This paper presents lightweight models for encoding characteristics of Internet-of-Things (IoT) network packets: 1) we present two strategies to encode packets (regardless of size, encryption, or protocol) into integer vectors: a shallow lightweight neural network and compression. With a public dataset containing about 8 million packets emitted by 22 IoT device types, we show the encoded packets can form complete (up to 80%) and homogeneous (up to 89%) clusters; 2) we demonstrate the efficacy of our generated encodings in the downstream classification task and quantify their computing costs. We train three multi-class models to predict the IoT class from network packets and show our models can achieve the same level of accuracy (94%) as deep neural network embeddings but with computing costs up to 10 times lower; 3) we examine how the amount of packet data (headers and payload) affects prediction quality. We demonstrate how the choice of Internet Protocol (IP) payload strikes a balance between prediction accuracy (99%) and cost. Together with the cost-efficacy of the models, this capability enables rapid and accurate predictions, meeting the requirements of network operators.
{"title":"Robust and Lightweight Modeling of IoT Network Behaviors From Raw Traffic Packets","authors":"Aleksandar Pasquini;Rajesh Vasa;Irini Logothetis;Hassan Habibi Gharakheili;Alexander Chambers;Minh Tran","doi":"10.1109/TMLCN.2024.3517613","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3517613","url":null,"abstract":"Machine Learning (ML)-based techniques are increasingly used for network management tasks, such as intrusion detection, application identification, or asset management. Recent studies show that neural network-based traffic analysis can achieve performance comparable to human feature-engineered ML pipelines. However, neural networks provide this performance at a higher computational cost and complexity, due to high-throughput traffic conditions necessitating specialized hardware for real-time operations. This paper presents lightweight models for encoding characteristics of Internet-of-Things (IoT) network packets; 1) we present two strategies to encode packets (regardless of their size, encryption, and protocol) to integer vectors: a shallow lightweight neural network and compression. With a public dataset containing about 8 million packets emitted by 22 IoT device types, we show the encoded packets can form complete (up to 80%) and homogeneous (up to 89%) clusters; 2) we demonstrate the efficacy of our generated encodings in the downstream classification task and quantify their computing costs. We train three multi-class models to predict the IoT class given network packets and show our models can achieve the same levels of accuracy (94%) as deep neural network embeddings but with computing costs up to 10 times lower; 3) we examine how the amount of packet data (headers and payload) can affect the prediction quality. We demonstrate how the choice of Internet Protocol (IP) payloads strikes a balance between prediction accuracy (99%) and cost. 
Along with the cost-efficacy of models, this capability can result in rapid and accurate predictions, meeting the requirements of network operators.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"98-116"},"PeriodicalIF":0.0,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10802939","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142890343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
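The two packet encoders named in the entry above (a shallow neural network and compression) are not detailed in the abstract. A minimal sketch of the compression route is shown below: compress the raw bytes, then truncate-or-pad to a fixed length so packets of any size, protocol, or encryption yield comparable integer vectors. The vector length of 64 and the use of zlib are illustrative assumptions, not the authors' choices.

```python
import zlib

def encode_packet(raw: bytes, length: int = 64) -> list:
    """Map a raw packet of any size to a fixed-length integer vector.

    Sketch of a compression-based encoding: zlib-compress the packet
    bytes, then truncate or zero-pad to a fixed length. Each element
    of the result is a byte value in [0, 255], so the vector can feed
    a downstream classifier directly.
    """
    compressed = zlib.compress(raw)
    padded = compressed[:length].ljust(length, b"\x00")
    return list(padded)

vec = encode_packet(b"GET /sensor HTTP/1.1\r\nHost: iot.local\r\n\r\n")
print(len(vec))                          # 64
print(all(0 <= v <= 255 for v in vec))   # True
```

The appeal of such an encoder is that it has no trainable parameters and negligible memory footprint, which is consistent with the paper's goal of cutting computing costs relative to deep embeddings.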
Self-Supervised Contrastive Learning for Joint Active and Passive Beamforming in RIS-Assisted MU-MIMO Systems
Pub Date : 2024-12-11 DOI: 10.1109/TMLCN.2024.3515913
Zhizhou He;Fabien Héliot;Yi Ma
Reconfigurable Intelligent Surfaces (RIS) can enhance system performance at the cost of increased complexity in multi-user MIMO systems. The beamforming options scale with the number of antennas at the base station/RIS. Existing methods for solving this problem tend to use computationally intensive iterative methods that are not scalable for large RIS-aided MIMO systems. We propose a novel self-supervised contrastive learning neural network (NN) architecture to optimize the sum spectral efficiency through joint active and passive beamforming design in multi-user RIS-aided MIMO systems. Our scheme utilizes contrastive learning to capture channel features from augmented channel data and can then be trained to perform beamforming with only 1% of labeled data. The labels are derived through a closed-form optimization algorithm that leverages a sequential fractional programming approach. The proposed self-supervised design greatly reduces the computational complexity of the training phase. Moreover, our model can operate under various noise levels by using data augmentation, while maintaining robust out-of-distribution performance across various propagation environments and signal-to-noise ratios (SNRs). During training, our network needs only 10% of labeled data to converge, compared to supervised learning. Our trained NN then achieves performance only ~7% and ~2.5% away from the mathematical upper bound and fully supervised learning, respectively, with far less computational complexity.
{"title":"Self-Supervised Contrastive Learning for Joint Active and Passive Beamforming in RIS-Assisted MU-MIMO Systems","authors":"Zhizhou He;Fabien Héliot;Yi Ma","doi":"10.1109/TMLCN.2024.3515913","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3515913","url":null,"abstract":"Reconfigurable Intelligent Surfaces (RIS) can enhance system performance at the cost of increased complexity in multi-user MIMO systems. The beamforming options scale with the number of antennas at the base station/RIS. Existing methods for solving this problem tend to use computationally intensive iterative methods that are non-scalable for large RIS-aided MIMO systems. We propose here a novel self-supervised contrastive learning neural network (NN) architecture to optimize the sum spectral efficiency through joint active and passive beamforming design in multi-user RIS-aided MIMO systems. Our scheme utilizes contrastive learning to capture the channel features from augmented channel data and then can be trained to perform beamforming with only 1% of labeled data. The labels are derived through a closed-form optimization algorithm, leveraging a sequential fractional programming approach. Leveraging the proposed self-supervised design helps to greatly reduce the computational complexity during the training phase. Moreover, our proposed model can operate under various noise levels by using data augmentation methods while maintaining a robust out-of-distribution performance under various propagation environments and different signal-to-noise ratios (SNR)s. During training, our proposed network only needs 10% of labeled data to converge when compared to supervised learning. 
Our trained NN can then achieve performance which is only \u0000<inline-formula> <tex-math>$~7%$ </tex-math></inline-formula>\u0000 and \u0000<inline-formula> <tex-math>$~2.5%$ </tex-math></inline-formula>\u0000 away from mathematical upper bound and fully supervised learning, respectively, with far less computational complexity.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"147-162"},"PeriodicalIF":0.0,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10793234","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142912511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
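The contrastive pretraining idea in the entry above can be illustrated with a generic InfoNCE loss over pairs of augmented channel features: two augmentations of the same channel realization form a positive pair, and every other sample in the batch acts as a negative. This is a standard formulation, not the authors' exact objective; the feature dimensions and the temperature value are assumptions.

```python
import numpy as np

def info_nce_loss(anchors: np.ndarray, positives: np.ndarray, tau: float = 0.1) -> float:
    """InfoNCE-style contrastive loss over augmented feature pairs.

    anchors[i] and positives[i] are two augmentations (e.g. different
    noise levels) of the same channel realization; off-diagonal pairs
    serve as negatives. Features are L2-normalized so the logits are
    cosine similarities scaled by 1/tau.
    """
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / tau                      # pairwise cosine similarities
    labels = np.arange(len(a))                  # positives sit on the diagonal
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-log_probs[labels, labels].mean())

rng = np.random.default_rng(0)
h = rng.standard_normal((8, 16))                # 8 channel feature vectors
loss_matched = info_nce_loss(h, h + 0.01 * rng.standard_normal((8, 16)))
loss_random = info_nce_loss(h, rng.standard_normal((8, 16)))
print(loss_matched < loss_random)  # matched augmentations score lower loss -> True
```

Minimizing such a loss pulls augmentations of the same channel together in feature space, which is what lets the encoder learn channel structure from unlabeled data before the small labeled set is used for the beamforming head.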