
Latest publications in IEEE Transactions on Machine Learning in Communications and Networking

Resource Optimization in Multi-Hop IAB Networks: Balancing Data Freshness and Spectral Efficiency
Pub Date : 2025-11-20 DOI: 10.1109/TMLCN.2025.3635578
Sarder Fakhrul Abedin;Aamir Mahmood;Zhu Han;Mikael Gidlund
This work proposes a multi-objective resource optimization framework for integrated access and backhaul (IAB) networks, tackling the dual challenges of timely data updates and spectral efficiency under dynamic wireless conditions. Conventional single-objective optimization is often impractical for IAB networks, where objective preferences are unknown or difficult to predefine. Therefore, we formulate a multi-objective problem that minimizes the age of information (AoI) and maximizes spectral efficiency, subject to a risk-aware AoI constraint, access-backhaul throughput fairness, and other contextual requirements. A lightweight proportional fair (PF) scheduling algorithm first handles user association and access resource allocation. Subsequently, a Pareto Q-learning-based reinforcement learning (RL) scheme allocates backhaul resources, with the PF scheduler’s outcomes integrated into the state and constrained action spaces of a Markov decision process (MDP). The reward function balances AoI and spectral efficiency objectives while explicitly capturing fairness, thereby resulting in robust long-term performance without imposing fixed weights. Furthermore, an adaptive value-difference-based exploration technique adjusts exploration rates based on Q-value estimate variances, promoting strategic exploration for optimal trade-offs. Simulations show that the proposed method outperforms baselines, reducing the convexity gap between approximated and optimal Pareto fronts by 68.6% and improving fairness by 16.9%.
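As a toy illustration of the Pareto bookkeeping such a scheme needs, the sketch below filters a set of hypothetical (AoI, spectral-efficiency) value vectors down to the non-dominated ones. The pairs and the helper names are illustrative assumptions, not the paper's implementation:

```python
def dominates(a, b):
    """a dominates b if a's AoI is no worse (lower) and a's spectral
    efficiency is no worse (higher), and the vectors differ.
    Each vector is (aoi, se): AoI is minimized, SE is maximized."""
    return a[0] <= b[0] and a[1] >= b[1] and a != b

def pareto_front(vectors):
    """Keep only the non-dominated (aoi, se) vectors."""
    return [v for v in vectors if not any(dominates(u, v) for u in vectors)]

# Hypothetical candidate Q-vectors for one backhaul allocation decision.
candidates = [(2.0, 5.0), (3.0, 4.0), (1.5, 5.5), (2.5, 6.0)]
front = pareto_front(candidates)  # only (1.5, 5.5) and (2.5, 6.0) survive
```

A Pareto Q-learning agent would maintain such a non-dominated set per state-action pair instead of a single scalar Q-value.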
Vol. 3, pp. 1287-1310. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11262194. Citations: 0
Communication Efficient Federated Learning With Quantization-Aware Training Design
Pub Date : 2025-11-19 DOI: 10.1109/TMLCN.2025.3635050
Xiang Fang;Li Chen;Huarui Yin;Xiaohui Chen;Weidong Wang
Model quantization is an effective method to improve communication efficiency in federated learning (FL). Existing FL quantization protocols mostly remain at the level of post-training quantization (PTQ), which incurs a large quantization loss, especially in low-bit settings. In this work, we propose an FL quantization training strategy to reduce the impact of quantization on model quality. Specifically, we first apply quantization-aware training (QAT) to FL (QAT-FL), which reduces quantization distortion by adding a fake-quantization module to the model so that the model can perceive future quantization during training. The convergence guarantee of the QAT-FL algorithm is established under certain assumptions. Building on the QAT-FL algorithm, we extend the discussion to non-uniform quantization and an adaptive algorithm, so that the model can adaptively adjust the parameter distribution and the number of quantization bits to reduce communication traffic during training. Experimental results on the MNIST, CIFAR-10, and FEMNIST datasets show that QAT-FL has advantages in terms of training loss and model inference accuracy, and its adaptive-bit quantization also greatly improves communication efficiency.
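The fake-quantization idea can be illustrated with a symmetric uniform quantizer applied in the forward pass (during training, a straight-through estimator would pass gradients through the rounding unchanged). A minimal NumPy sketch under an assumed per-tensor 4-bit scheme, not the paper's exact module:

```python
import numpy as np

def fake_quantize(w, bits=4):
    """Simulate b-bit symmetric uniform quantization of weights w.
    The model trains against these quantized values ("fake" quantization);
    a straight-through estimator would let gradients bypass the rounding."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 7 positive levels for 4 bits
    scale = np.max(np.abs(w)) / qmax      # per-tensor scale (an assumption)
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

w = np.array([-0.9, -0.31, 0.0, 0.42, 0.9])
wq = fake_quantize(w, bits=4)             # values snapped to the 4-bit grid
```

Sending the quantized weights instead of full-precision ones is what cuts the uplink traffic; QAT makes the model robust to that grid.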
Vol. 4, pp. 45-59. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11260453. Citations: 0
Task-Specific Sharpness-Aware O-RAN Resource Management Using Multi-Agent Reinforcement Learning
Pub Date : 2025-11-19 DOI: 10.1109/TMLCN.2025.3634994
Fatemeh Lotfi;Hossein Rajoli;Fatemeh Afghah
Next-generation networks utilize the Open Radio Access Network (O-RAN) architecture to enable dynamic resource management, facilitated by the RAN Intelligent Controller (RIC). While deep reinforcement learning (DRL) models show promise in optimizing network resources, they often struggle with robustness and generalizability in dynamic environments. This paper introduces a novel resource management approach that enhances the Soft Actor Critic (SAC) algorithm with Sharpness-Aware Minimization (SAM) in a distributed Multi-Agent RL (MARL) framework. Our method introduces an adaptive and selective SAM mechanism, where regularization is explicitly driven by temporal-difference (TD)-error variance, ensuring that only agents facing high environmental complexity are regularized. This targeted strategy reduces unnecessary overhead, improves training stability, and enhances generalization without sacrificing learning efficiency. We further incorporate a dynamic $\rho$ scheduling scheme to refine the exploration-exploitation trade-off across agents. Experimental results show our method significantly outperforms conventional DRL approaches, yielding up to a 22% improvement in resource allocation efficiency and ensuring superior QoS satisfaction across diverse O-RAN slices.
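The selective mechanism can be sketched as: apply the SAM ascent step only when an agent's recent TD-error variance exceeds a threshold. A toy NumPy sketch on a quadratic loss, with a hypothetical threshold and fixed rho rather than the paper's schedule:

```python
import numpy as np

def sam_gradient(w, grad_fn, rho=0.05):
    """Sharpness-aware gradient: evaluate grad at the perturbed
    point w + rho * g / ||g|| instead of at w itself."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    return grad_fn(w + eps)

def update(w, grad_fn, td_errors, lr=0.1, var_threshold=0.5):
    """Use SAM only for agents whose TD-error variance signals a hard task."""
    if np.var(td_errors) > var_threshold:   # high environmental complexity
        g = sam_gradient(w, grad_fn)
    else:                                   # cheap plain gradient otherwise
        g = grad_fn(w)
    return w - lr * g

grad_fn = lambda w: 2.0 * w                 # gradient of f(w) = ||w||^2
w = np.array([1.0, -2.0])
w_hard = update(w, grad_fn, td_errors=[0.1, 2.0, -1.5])  # high variance -> SAM
w_easy = update(w, grad_fn, td_errors=[0.1, 0.1, 0.12])  # low variance -> plain
```

Gating the extra gradient evaluation this way is what keeps the per-step cost close to vanilla SAC for "easy" agents.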
Vol. 4, pp. 98-114. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11260483. Citations: 0
A Reinforcement Learning Framework for Resource Allocation in Uplink Carrier Aggregation in the Presence of Self Interference
Pub Date : 2025-11-14 DOI: 10.1109/TMLCN.2025.3633248
Jaswanth Bodempudi;Batta Siva Sairam;Madepalli Haritha;Sandesh Rao Mattu;Ananthanarayanan Chockalingam
To meet the ever-increasing demand for higher data rates in mobile networks across generations, many novel schemes have been proposed in the standards. One such scheme is carrier aggregation (CA). Simply put, CA is a technique that allows mobile networks to combine multiple carriers to increase data rate and improve network efficiency. On the uplink, for power-constrained users, this translates to the need for an efficient resource allocation scheme, where each user distributes its available power among its assigned uplink carriers. Choosing a good set of carriers and allocating appropriate power on them is of paramount importance for good performance. Another factor critical to good performance is how well the degradation caused by the harmonic/intermodulation terms generated by the user’s transmitter non-linearities is handled. For example, if the carrier allocation is such that a harmonic of a user’s uplink carrier falls on that user’s downlink frequency, it leads to a self-coupling-induced sensitivity degradation of that user’s downlink receiver. Considering these factors, in this paper, we model the uplink carrier aggregation problem as an optimal resource allocation problem with the associated constraints of non-linearity-induced self-interference (SI).
We propose a novel reward function that is critical for enabling the proposed CA2C algorithm to efficiently handle SI. The CA2C algorithm along with the proposed reward function learns to assign and activate suitable carriers in an online fashion. Numerical results demonstrate that the proposed RL based scheme is able to achieve higher sum throughputs compared to naive schemes. The results also demonstrate that the proposed reward function allows the CA2C algorithm to adapt the optimization both in the presence and absence of SI.
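The compound action (a discrete carrier-activation choice coupled with a continuous power split) can be sketched as below. The thresholding and softmax mapping are illustrative assumptions, not the CA2C network itself:

```python
import numpy as np

def compound_action(carrier_logits, power_logits, p_max=1.0):
    """Discrete part: activate carriers with positive logit.
    Continuous part: softmax power split over the active carriers,
    scaled so total transmit power equals the budget p_max."""
    mask = carrier_logits > 0.0                  # which carriers to turn on
    z = np.where(mask, power_logits, -np.inf)    # inactive carriers get 0 power
    e = np.exp(z - np.max(z))                    # stable softmax
    power = p_max * e / e.sum()
    return mask, power

# Hypothetical actor outputs for a 3-carrier user.
mask, power = compound_action(
    carrier_logits=np.array([1.2, -0.3, 0.7]),
    power_logits=np.array([0.5, 0.1, 0.5]),
)
```

Mapping both heads through one function like this is what makes the action space "compound": the continuous allocation is conditioned on the discrete activation.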
Vol. 3, pp. 1265-1286. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11248959. Citations: 0
Toward Autonomous and Efficient Cybersecurity: A Multi-Objective AutoML-Based Intrusion Detection System
Pub Date : 2025-11-11 DOI: 10.1109/TMLCN.2025.3631379
Li Yang;Abdallah Shami
With increasingly sophisticated cybersecurity threats and rising demand for network automation, autonomous cybersecurity mechanisms are becoming critical for securing modern networks. The rapid expansion of Internet of Things (IoT) systems amplifies these challenges, as resource-constrained IoT devices demand scalable and efficient security solutions. In this work, an innovative Intrusion Detection System (IDS) utilizing Automated Machine Learning (AutoML) and Multi-Objective Optimization (MOO) is proposed for autonomous and optimized cyber-attack detection in modern networking environments. The proposed IDS framework integrates two primary innovative techniques: Optimized Importance and Percentage-based Automated Feature Selection (OIP-AutoFS) and Optimized Performance, Confidence, and Efficiency-based Combined Algorithm Selection and Hyperparameter Optimization (OPCE-CASH). These components optimize feature selection and model learning processes to strike a balance between intrusion detection effectiveness and computational efficiency. This work presents the first IDS framework that integrates all four AutoML stages and employs multi-objective optimization to jointly optimize detection effectiveness, efficiency, and confidence for deployment in resource-constrained systems. Experimental evaluations over two benchmark cybersecurity datasets demonstrate that the proposed MOO-AutoML IDS outperforms state-of-the-art IDSs, establishing a new benchmark for autonomous, efficient, and optimized security for networks. Designed to support IoT and edge environments with resource constraints, the proposed framework is applicable to a variety of autonomous cybersecurity applications across diverse networked environments.
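The importance-and-percentage idea behind OIP-AutoFS can be sketched as: rank features by an importance score and keep the top p percent. The sketch below uses hypothetical importance scores; in the paper, the scores and the percentage are themselves optimized, which this toy version does not show:

```python
import numpy as np

def select_top_percent(X, importance, percent=50):
    """Keep the `percent`% most important feature columns of X."""
    k = max(1, int(round(X.shape[1] * percent / 100)))
    keep = np.sort(np.argsort(importance)[::-1][:k])  # indices of top-k scores
    return keep, X[:, keep]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                 # 100 flows, 4 features
importance = np.array([0.05, 0.40, 0.10, 0.45])  # hypothetical scores
keep, X_sel = select_top_percent(X, importance, percent=50)
```

Shrinking the feature matrix this way before model training is where the computational-efficiency half of the multi-objective trade-off comes from.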
Vol. 3, pp. 1244-1264. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11240569. Citations: 0
Generation of Orthogonal THz Pulses for Wireless Communications Based on Diffractive Autoencoder Neural Networks
Pub Date : 2025-11-07 DOI: 10.1109/TMLCN.2025.3630589
Xin Wang;Xudong Wang
Pulse-based systems provide a promising alternative for terahertz (THz) communications, especially for joint communication and sensing applications. For such a system, THz Gaussian pulses are the fundamental and commonly used waveforms, but they are susceptible to distortion due to their large bandwidth occupation. A critical yet unresolved issue is generating THz pulses with tunable center frequencies and bandwidths. In this paper, a THz-pulse generator is designed based on diffractive surfaces cascaded in multiple layers. Given a THz Gaussian pulse as input, each surface modifies the pulse’s amplitude and phase and diffracts it to the next surface, so as to generate pulses with the expected frequencies and bandwidths. To determine the parameters of millions of elements across all surfaces, and to handle the case of generating multiple THz pulses from the same THz Gaussian input signal, a diffractive autoencoder neural network (DANN) is developed. Subsequently, using the generated pulses for data transmission over a THz channel, the symbol error rate (SER) performance is analyzed. Extensive simulations are conducted to validate and evaluate the DANN-based THz pulse generator. Additionally, using just 5 diffractive surfaces, the generator can support at least 10 pairs of orthogonal THz pulses with a correlation error ratio of less than $10^{-1}$.
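The orthogonality target (a correlation error ratio below $10^{-1}$) amounts to a normalized cross-correlation check between pulse pairs. A toy sketch with two Gaussian-envelope pulses at assumed, well-separated center frequencies in arbitrary units, not the paper's THz parameters:

```python
import numpy as np

def gaussian_pulse(t, f0, sigma=1.0):
    """Gaussian-envelope pulse with center frequency f0 (arbitrary units)."""
    return np.exp(-t**2 / (2 * sigma**2)) * np.cos(2 * np.pi * f0 * t)

def correlation_ratio(p1, p2):
    """Normalized inner product |<p1, p2>| / (||p1|| ||p2||)."""
    return abs(np.dot(p1, p2)) / (np.linalg.norm(p1) * np.linalg.norm(p2))

t = np.linspace(-5, 5, 4001)
p1 = gaussian_pulse(t, f0=1.0)
p2 = gaussian_pulse(t, f0=2.0)
r = correlation_ratio(p1, p2)   # tiny: spectra barely overlap
```

When the center-frequency separation is large relative to the pulse bandwidth, this ratio drops far below 0.1, which is the regime the generated pulse pairs are reported to reach.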
Vol. 3, pp. 1210-1226. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11231335. Citations: 0
Diffusion-Aided Joint Source Channel Coding for High Realism Wireless Image Transmission
Pub Date : 2025-11-03 DOI: 10.1109/TMLCN.2025.3628535
Mingyu Yang;Bowen Liu;Boyang Wang;Hun-Seok Kim
Deep learning-based joint source-channel coding (deep JSCC) has been demonstrated to be an effective approach for wireless image transmission. However, many current approaches utilize an autoencoder framework to optimize conventional metrics such as Mean Squared Error (MSE) and Structural Similarity Index (SSIM), which are inadequate for preserving the perceptual quality of reconstructed images. This issue is more prominent under stringent bandwidth constraints or low signal-to-noise ratio (SNR) conditions. To tackle this challenge, we propose DiffJSCC, a novel framework that leverages the prior knowledge of the pre-trained Stable Diffusion model to produce high-realism images via the conditional diffusion denoising process. First, our DiffJSCC employs an autoencoder structure similar to prior deep JSCC works to generate an initial image reconstruction from the noisy channel symbols. This preliminary reconstruction serves as an intermediate step where robust multimodal spatial and textual features are extracted. In the following diffusion step, DiffJSCC uses the derived multimodal features, together with channel state information such as the signal-to-noise ratio (SNR) and channel gain, to guide the diffusion denoising process through a novel control module. To maintain the balance between realism and fidelity, an optional intermediate guidance approach using the initial image reconstruction is implemented. Extensive experiments on diverse datasets reveal that our method significantly surpasses prior deep JSCC approaches on both perceptual metrics and downstream task performance, showcasing its ability to preserve the semantics of the original transmitted images. Notably, DiffJSCC can achieve highly realistic reconstructions for $768\times 512$ pixel Kodak images with only 3072 symbols (<0.008 symbols per pixel). The code is available at https://github.com/mingyuyng/DiffJSCC.
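The symbol budget quoted in the abstract (3072 channel symbols for a $768\times 512$ Kodak image) implies a rate below 0.008 channel symbols per pixel; the arithmetic can be checked directly:

```python
# Channel usage of the Kodak experiment quoted in the abstract:
# 3072 channel symbols for one 768x512-pixel image.
symbols = 3072
pixels = 768 * 512            # 393216 pixels
ratio = symbols / pixels      # symbols per pixel -> 0.0078125
```

This ratio (sometimes called the channel bandwidth ratio in deep JSCC work) is the knob that the "stringent bandwidth constraints" in the abstract refer to.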
Mingyu Yang; Bowen Liu; Boyang Wang; Hun-Seok Kim, “Diffusion-Aided Joint Source Channel Coding for High Realism Wireless Image Transmission,” IEEE Transactions on Machine Learning in Communications and Networking, vol. 3, pp. 1227–1243. Pub Date: 2025-11-03, DOI: 10.1109/TMLCN.2025.3628535. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11224625
Citations: 0
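The bandwidth figure quoted in the DiffJSCC abstract above, and the AWGN channel it transmits over, are easy to check numerically. The sketch below is not the paper's learned model — it is a minimal NumPy stand-in that verifies the symbols-per-pixel arithmetic and simulates a unit-power complex AWGN channel at a nominal SNR; the unit-modulus test symbols and the 10 dB operating point are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def awgn(symbols, snr_db):
    """Complex AWGN channel; `symbols` are assumed to have unit average power."""
    snr_lin = 10 ** (snr_db / 10)
    sigma = np.sqrt(1.0 / (2.0 * snr_lin))  # noise std per real dimension
    noise = sigma * (rng.standard_normal(symbols.shape)
                     + 1j * rng.standard_normal(symbols.shape))
    return symbols + noise

# Bandwidth cost quoted in the abstract: 3072 complex symbols for a 768x512 image.
symbols_per_pixel = 3072 / (768 * 512)  # = 0.0078125, i.e. < 0.008

# Send unit-power symbols (illustrative unit-modulus constellation) at 10 dB SNR
# and verify the empirical SNR of the received samples matches the nominal one.
tx = np.exp(1j * rng.uniform(0, 2 * np.pi, 200_000))
rx = awgn(tx, snr_db=10.0)
empirical_snr_db = 10 * np.log10(np.mean(np.abs(tx) ** 2)
                                 / np.mean(np.abs(rx - tx) ** 2))
```

At this channel-bandwidth ratio, a diffusion prior is doing most of the reconstruction work: fewer than one symbol is available per hundred pixels.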
Erratum to “Dual Self-Attention is What You Need for Model Drift Detection in 6G Networks”
Pub Date : 2025-10-29 DOI: 10.1109/TMLCN.2025.3618993
Mazene Ameur;Bouziane Brik;Adlen Ksentini
Presents corrections to the paper “Dual Self-Attention is What You Need for Model Drift Detection in 6G Networks”.
IEEE Transactions on Machine Learning in Communications and Networking, vol. 3, p. 1160. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11220870
Citations: 0
SD-PPDDA: A Privacy Efficient Decentralized Dual Averaging Algorithm Over Networks
Pub Date : 2025-10-24 DOI: 10.1109/TMLCN.2025.3625519
Qingguo Lü;Chenglong He;Keke Zhang;Huaqing Li;Tingwen Huang
This paper studies a decentralized online constrained optimization problem characterized by a shared constraint set. Nodes in the communication and learning network conduct local computations and communications to collaboratively solve the problem. Each node can access its own local cost function, whose value depends on its decision at each time step. However, because nodes continuously exchange privacy-sensitive information, most existing algorithms for this problem are susceptible to privacy leakage. To address this challenge, we propose an effective state-decomposition-based privacy-preserving decentralized dual averaging (SD-PPDDA) algorithm. The SD-PPDDA algorithm employs a state-decomposition scheme to preserve privacy without introducing additional hidden signals (which may cause additional optimization errors) or incurring significant computational overhead. Theoretical analysis shows that the SD-PPDDA algorithm achieves the desired sublinear regret, specifically converging at a rate of $\mathcal{O}(\sqrt{K})$ (where $K$ denotes the number of iterations), while preserving the privacy of each node's cost function. In addition, numerical simulations further validate the convergence and practicality of the algorithm.
IEEE Transactions on Machine Learning in Communications and Networking, vol. 3, pp. 1197–1209. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11217255
Citations: 0
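The decentralized dual-averaging template behind the SD-PPDDA abstract above can be sketched in a few lines — without the paper's state-decomposition privacy mechanism. The quadratic local costs f_i(x) = (x − a_i)², the shared constraint set [−1, 1], the four-node ring, and the doubly stochastic mixing weights are all illustrative assumptions. Each node mixes its dual (accumulated-gradient) state with its neighbors', adds its local gradient, and projects −z_i/√k onto the constraint set; the decisions approach the common minimizer at the O(1/√k) rate underlying the regret bound.

```python
import numpy as np

# Ring of n nodes with a doubly stochastic mixing matrix
# (weight 1/2 on self, 1/4 on each ring neighbor).
n = 4
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

a = np.array([0.2, 0.4, 0.6, 0.8])     # local targets: f_i(x) = (x - a_i)^2
x_star = np.clip(a.mean(), -1.0, 1.0)  # minimizer of sum_i f_i over [-1, 1]

z = np.zeros(n)                        # dual states (accumulated gradients)
x = np.zeros(n)                        # per-node decisions
K = 20000
for k in range(1, K + 1):
    x = np.clip(-z / np.sqrt(k), -1.0, 1.0)  # primal step: Proj_X(-z / beta_k)
    grads = 2.0 * (x - a)                    # local gradients at current decisions
    z = W @ z + grads                        # mix dual states, then accumulate

max_err = np.max(np.abs(x - x_star))   # worst-node distance to the minimizer
```

The mixing step is the only inter-node communication; SD-PPDDA's contribution is making exactly this exchanged state privacy-preserving via decomposition rather than added noise.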
Anti-Jamming 5G Millimeter-Wave Communication via Joint Analog and Digital Beamforming: A Bayesian Optimization Approach
Pub Date : 2025-10-17 DOI: 10.1109/TMLCN.2025.3622593
Peihao Yan;Bowei Zhang;Shichen Zhang;Kai Zeng;Huacheng Zeng
5G millimeter-wave (mmWave) communications are essential for enabling ultra-high-speed, low-latency wireless connectivity to support data-intensive applications. However, the highly directional nature and sensitivity of mmWave signals make them particularly susceptible to jamming attacks. Therefore, securing 5G mmWave communication systems against jamming attacks is critical for ensuring reliable wireless connectivity in mission-critical applications. In this paper, we propose an online Bayesian Optimization (BayOpt) framework for joint analog and digital beamforming optimization at a mmWave communication device, aimed at maximizing its packet decoding rate under a constant jamming attack. By modeling the optimization objective as a black-box function and leveraging online learning to guide beam search, the BayOpt framework efficiently identifies near-optimal beam configurations in both the analog and digital domains while not requiring any knowledge of the jamming strategy or channel conditions. We have implemented the proposed anti-jamming solution on a 28 GHz mmWave testbed and conducted extensive evaluations across four distinct jamming scenarios. Over-the-air experiments demonstrate the effectiveness of the BayOpt framework in suppressing jamming interference. Notably, in a scenario where the jamming signal is 10 dB stronger than the desired signal, the BayOpt-enabled mmWave receiver achieves 73% of the throughput observed in a jamming-free environment.
IEEE Transactions on Machine Learning in Communications and Networking, vol. 3, pp. 1161–1177. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11206744
Citations: 0
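The black-box beam search described in the BayOpt abstract above can be illustrated with a small self-contained loop: a Gaussian-process surrogate (RBF kernel) plus a UCB acquisition rule over a discrete grid of analog beam directions. The synthetic decoding-rate objective, its peak at 117°, the kernel lengthscale, and the 20-probe budget are all assumptions standing in for the over-the-air black box; the paper's joint analog/digital search and jamming scenarios are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete search space: candidate analog beam directions (degrees).
beams = np.linspace(0.0, 180.0, 61)

def decode_rate(theta):
    """Synthetic stand-in for the measured packet decoding rate:
    one jammer-avoiding peak (assumed at 117 deg) plus measurement noise."""
    return np.exp(-((theta - 117.0) / 25.0) ** 2) + 0.01 * rng.standard_normal()

def rbf(A, B, ls=20.0):
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

# Online Bayesian optimization: GP posterior + upper-confidence-bound acquisition.
X, y = [], []
for t in range(20):
    if t < 3:                                   # a few random warm-up probes
        nxt = rng.choice(beams)
    else:
        Xa, ya = np.array(X), np.array(y)
        K = rbf(Xa, Xa) + 1e-4 * np.eye(len(Xa))     # jitter for stability
        Ks = rbf(beams, Xa)
        mu = Ks @ np.linalg.solve(K, ya)             # posterior mean on the grid
        var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
        ucb = mu + 2.0 * np.sqrt(np.maximum(var, 0.0))
        nxt = beams[np.argmax(ucb)]                  # probe most promising beam
    X.append(float(nxt))
    y.append(decode_rate(nxt))

best_beam = X[int(np.argmax(y))]
```

Because unexplored beams keep a large posterior variance, UCB first sweeps the grid and then concentrates probes around the peak — the same explore/exploit balance the framework relies on when the jamming strategy is unknown.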