
Latest publications in IEEE Transactions on Machine Learning in Communications and Networking

Task-Specific Sharpness-Aware O-RAN Resource Management Using Multi-Agent Reinforcement Learning
Pub Date : 2025-11-19 DOI: 10.1109/TMLCN.2025.3634994
Fatemeh Lotfi;Hossein Rajoli;Fatemeh Afghah
Next-generation networks utilize the Open Radio Access Network (O-RAN) architecture to enable dynamic resource management, facilitated by the RAN Intelligent Controller (RIC). While deep reinforcement learning (DRL) models show promise in optimizing network resources, they often struggle with robustness and generalizability in dynamic environments. This paper introduces a novel resource management approach that enhances the Soft Actor Critic (SAC) algorithm with Sharpness-Aware Minimization (SAM) in a distributed Multi-Agent RL (MARL) framework. Our method introduces an adaptive and selective SAM mechanism, where regularization is explicitly driven by temporal-difference (TD)-error variance, ensuring that only agents facing high environmental complexity are regularized. This targeted strategy reduces unnecessary overhead, improves training stability, and enhances generalization without sacrificing learning efficiency. We further incorporate a dynamic $\rho$ scheduling scheme to refine the exploration-exploitation trade-off across agents. Experimental results show our method significantly outperforms conventional DRL approaches, yielding up to a 22% improvement in resource allocation efficiency and ensuring superior QoS satisfaction across diverse O-RAN slices.
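The abstract does not include an implementation, but the selective, TD-error-variance-driven SAM step it describes can be illustrated with a short sketch. The gating threshold, the rho schedule, and the closure-style loss below are assumptions written against a generic PyTorch model; this is not the authors' code.

```python
import torch

def sam_update(model, optimizer, loss_fn, rho):
    """Generic two-pass Sharpness-Aware Minimization step (sketch)."""
    loss_fn().backward()                                  # gradients at the current weights
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / grad_norm                  # ascent step toward the sharp direction
            p.add_(e)
            eps.append(e)
    optimizer.zero_grad()
    loss_fn().backward()                                  # gradients at the perturbed weights
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)                                 # restore the original weights
    optimizer.step()
    optimizer.zero_grad()

def selective_sam(agents, rho_max):
    """Regularize only agents whose TD-error variance is high; scale rho with that variance."""
    variances = {agent: torch.var(agent.td_errors).item() for agent in agents}
    v_max = max(variances.values()) + 1e-12
    for agent in agents:
        if variances[agent] > agent.var_threshold:        # selective gate (assumed threshold)
            rho = rho_max * variances[agent] / v_max       # dynamic rho schedule (assumed form)
            sam_update(agent.actor, agent.optimizer, agent.loss_fn, rho)
        else:
            agent.plain_update()                           # standard SAC step, no SAM overhead
```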
Citations: 0
A Reinforcement Learning Framework for Resource Allocation in Uplink Carrier Aggregation in the Presence of Self Interference
Pub Date : 2025-11-14 DOI: 10.1109/TMLCN.2025.3633248
Jaswanth Bodempudi;Batta Siva Sairam;Madepalli Haritha;Sandesh Rao Mattu;Ananthanarayanan Chockalingam
To meet the ever-increasing demand for higher data rates in mobile networks across generations, many novel schemes have been proposed in the standards. One such scheme is carrier aggregation (CA). Simply put, CA is a technique that allows mobile networks to combine multiple carriers to increase data rate and improve network efficiency. On the uplink, for power-constrained users, this translates to the need for an efficient resource allocation scheme, where each user distributes its available power among its assigned uplink carriers. Choosing a good set of carriers and allocating appropriate power on the carriers is of paramount importance for good performance. Another factor that is critical to obtaining good performance is how well the degradation caused by the harmonic/intermodulation terms generated by the user’s transmitter non-linearities is handled. Specifically, for example, if the carrier allocation is such that a harmonic of a user’s uplink carrier falls on the downlink frequency of that user, it leads to a self-coupling-induced sensitivity degradation of that user’s downlink receiver. Considering these factors, in this paper, we model the uplink carrier aggregation problem as an optimal resource allocation problem with the associated constraints of non-linearity-induced self-interference (SI). This involves optimization over a discrete variable (which carriers need to be turned on) and a continuous variable (what power needs to be allocated on the selected carriers) in dynamic environments, a problem which is hard to solve using traditional methods owing to the mixed nature of the optimization variables and the additional need to consider the SI constraint in the problem. Therefore, in this paper, we adopt a reinforcement learning (RL) framework involving a compound-action actor-critic (CA2C) algorithm for the uplink carrier aggregation problem. We propose a novel reward function that is critical for enabling the proposed CA2C algorithm to efficiently handle SI. The CA2C algorithm along with the proposed reward function learns to assign and activate suitable carriers in an online fashion. Numerical results demonstrate that the proposed RL-based scheme is able to achieve higher sum throughputs compared to naive schemes. The results also demonstrate that the proposed reward function allows the CA2C algorithm to adapt the optimization both in the presence and absence of SI.
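As a rough illustration of the self-interference-aware reward shaping described above, the sketch below checks whether a low-order harmonic of any active uplink carrier falls inside the user's downlink band and discounts the sum-throughput reward when it does. The harmonic order, the multiplicative penalty, and the function names are illustrative assumptions rather than the paper's exact reward.

```python
import numpy as np

def harmonic_hits_downlink(uplink_freqs_hz, downlink_freq_hz, downlink_bw_hz, max_order=3):
    """True if any 2nd/3rd-order harmonic of an active uplink carrier lands in the downlink band."""
    for f in uplink_freqs_hz:
        for k in range(2, max_order + 1):
            if abs(k * f - downlink_freq_hz) <= downlink_bw_hz / 2:
                return True
    return False

def si_aware_reward(throughputs, active_mask, uplink_freqs_hz, downlink_freq_hz,
                    downlink_bw_hz, si_penalty=0.5):
    """Sum throughput over the selected carriers, discounted when self-interference is predicted."""
    throughputs = np.asarray(throughputs, dtype=float)
    active = np.asarray(active_mask, dtype=bool)
    reward = throughputs[active].sum()
    if harmonic_hits_downlink(np.asarray(uplink_freqs_hz)[active],
                              downlink_freq_hz, downlink_bw_hz):
        reward *= (1.0 - si_penalty)          # assumed multiplicative penalty for SI
    return reward

# Example: the 2nd harmonic of a 1.75 GHz uplink carrier (3.5 GHz) hits a 3.5 GHz downlink band.
r = si_aware_reward([50e6, 30e6], [True, True], [1.75e9, 2.1e9], 3.5e9, 100e6)
```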
Citations: 0
Toward Autonomous and Efficient Cybersecurity: A Multi-Objective AutoML-Based Intrusion Detection System
Pub Date : 2025-11-11 DOI: 10.1109/TMLCN.2025.3631379
Li Yang;Abdallah Shami
With increasingly sophisticated cybersecurity threats and rising demand for network automation, autonomous cybersecurity mechanisms are becoming critical for securing modern networks. The rapid expansion of Internet of Things (IoT) systems amplifies these challenges, as resource-constrained IoT devices demand scalable and efficient security solutions. In this work, an innovative Intrusion Detection System (IDS) utilizing Automated Machine Learning (AutoML) and Multi-Objective Optimization (MOO) is proposed for autonomous and optimized cyber-attack detection in modern networking environments. The proposed IDS framework integrates two primary innovative techniques: Optimized Importance and Percentage-based Automated Feature Selection (OIP-AutoFS) and Optimized Performance, Confidence, and Efficiency-based Combined Algorithm Selection and Hyperparameter Optimization (OPCE-CASH). These components optimize feature selection and model learning processes to strike a balance between intrusion detection effectiveness and computational efficiency. This work presents the first IDS framework that integrates all four AutoML stages and employs multi-objective optimization to jointly optimize detection effectiveness, efficiency, and confidence for deployment in resource-constrained systems. Experimental evaluations over two benchmark cybersecurity datasets demonstrate that the proposed MOO-AutoML IDS outperforms state-of-the-art IDSs, establishing a new benchmark for autonomous, efficient, and optimized security for networks. Designed to support IoT and edge environments with resource constraints, the proposed framework is applicable to a variety of autonomous cybersecurity applications across diverse networked environments.
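One plausible reading of the importance-and-percentage-based feature selection idea is sketched below: rank features by a model-derived importance score and keep the smallest prefix whose cumulative importance reaches a target percentage, then score pipelines with a scalarized multi-objective criterion. The random-forest scorer, the 90% default, and the weights are assumptions; the paper's OIP-AutoFS and OPCE-CASH optimize such choices rather than fixing them.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def importance_percentage_selection(X, y, target_pct=0.90):
    """Return indices of the smallest feature subset covering target_pct of total importance."""
    forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    order = np.argsort(forest.feature_importances_)[::-1]      # most important first
    cumulative = np.cumsum(forest.feature_importances_[order])
    k = int(np.searchsorted(cumulative, target_pct)) + 1       # smallest prefix reaching the target
    return order[:k]

def moo_score(f1, latency_s, mean_confidence, weights=(0.6, 0.2, 0.2), latency_budget_s=1.0):
    """Scalarized multi-objective score: detection effectiveness, efficiency, and confidence."""
    efficiency = max(0.0, 1.0 - latency_s / latency_budget_s)  # assumed efficiency proxy
    return weights[0] * f1 + weights[1] * efficiency + weights[2] * mean_confidence
```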
Citations: 0
Generation of Orthogonal THz Pulses for Wireless Communications Based on Diffractive Autoencoder Neural Networks
Pub Date : 2025-11-07 DOI: 10.1109/TMLCN.2025.3630589
Xin Wang;Xudong Wang
Pulse-based systems provide a promising alternative to terahertz (THz) communications, especially for joint communication and sensing applications. For such a system, THz Gaussian pulses are the fundamental and commonly used waveforms, but are susceptible to distortion due to their large bandwidth occupation. A critical yet unresolved issue is generating THz pulses with tunable center frequencies and bandwidths. In this paper, a THz-pulse generator is designed based on diffractive surfaces cascaded in multiple layers. Given the THz Gaussian pulse as an input, each surface has the ability to change its amplitude and phase and diffracts it to the next surface, so as to generate pulses with expected frequencies and bandwidths. To determine parameters for millions of elements on all surfaces and to handle the case of generating multiple THz pulses corresponding to the same THz Gaussian input signal, a diffractive autoencoder neural network (DANN) is developed. Subsequently, using the generated pulses for data transmission under THz channel, the symbol error rate (SER) performance is analyzed. Extensive simulations are conducted to validate and evaluate the DANN-based THz pulse generator. Additionally, using just 5 diffractive surfaces, the generator can support at least 10 pairs of orthogonal THz pulses with a correlation error ratio of less than $10^{-1}$ .
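The layered diffractive structure described above can be summarized as alternating element-wise complex modulation (each surface's learnable amplitude and phase) and free-space propagation to the next surface. The sketch below shows only that data flow; the propagation operator, layer count, and element count are placeholders, and the DANN training that fits the surface parameters is omitted.

```python
import numpy as np

def diffractive_cascade(field, surfaces, propagate):
    """Propagate a complex field through cascaded diffractive surfaces.

    field     -- complex input field samples (e.g., the THz Gaussian pulse spectrum)
    surfaces  -- list of complex transmittance arrays (amplitude and phase per element)
    propagate -- models diffraction between consecutive surfaces
    """
    for transmittance in surfaces:
        field = field * transmittance     # per-element amplitude/phase modulation
        field = propagate(field)          # e.g., angular-spectrum or Fresnel propagation
    return field

# Toy usage: 5 random phase-only surfaces with identity propagation, just to show the flow.
rng = np.random.default_rng(0)
surfaces = [np.exp(1j * rng.uniform(0, 2 * np.pi, 64)) for _ in range(5)]
out = diffractive_cascade(np.ones(64, dtype=complex), surfaces, propagate=lambda f: f)
```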
Citations: 0
Diffusion-Aided Joint Source Channel Coding for High Realism Wireless Image Transmission
Pub Date : 2025-11-03 DOI: 10.1109/TMLCN.2025.3628535
Mingyu Yang;Bowen Liu;Boyang Wang;Hun-Seok Kim
Deep learning-based joint source-channel coding (deep JSCC) has been demonstrated to be an effective approach for wireless image transmission. However, many current approaches utilize an autoencoder framework to optimize conventional metrics such as Mean Squared Error (MSE) and Structural Similarity Index (SSIM), which are inadequate for preserving the perceptual quality of reconstructed images. Such an issue is more prominent under stringent bandwidth constraints or low signal-to-noise ratio (SNR) conditions. To tackle this challenge, we propose DiffJSCC, a novel framework that leverages the prior knowledge of the pre-trained Stable Diffusion model to produce high-realism images via the conditional diffusion denoising process. First, our DiffJSCC employs an autoencoder structure similar to prior deep JSCC works to generate an initial image reconstruction from the noisy channel symbols. This preliminary reconstruction serves as an intermediate step where robust multimodal spatial and textual features are extracted. In the following diffusion step, DiffJSCC uses the derived multimodal features, together with channel state information such as the signal-to-noise ratio (SNR) and channel gain, to guide the diffusion denoising process through a novel control module. To maintain the balance between realism and fidelity, an optional intermediate guidance approach using the initial image reconstruction is implemented. Extensive experiments on diverse datasets reveal that our method significantly surpasses prior deep JSCC approaches on both perceptual metrics and downstream task performance, showcasing its ability to preserve the semantics of the original transmitted images. Notably, DiffJSCC can achieve highly realistic reconstructions for $768\times 512$ pixel Kodak images with only 3072 symbols (<0.008 symbols per pixel). The code is available at https://github.com/mingyuyng/DiffJSCC.
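The channel stage of such a pipeline is standard and easy to sketch: the deep JSCC encoder's symbols cross a noisy channel at a given SNR, and that same SNR is also handed to the diffusion control module as conditioning. The AWGN model below is a generic sketch under a complex-symbol assumption; it is not taken from the paper's code.

```python
import numpy as np

def awgn_channel(symbols, snr_db, rng=np.random.default_rng()):
    """Add complex Gaussian noise so the received SNR matches snr_db."""
    signal_power = np.mean(np.abs(symbols) ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = np.sqrt(noise_power / 2.0) * (rng.standard_normal(symbols.shape)
                                          + 1j * rng.standard_normal(symbols.shape))
    return symbols + noise

# The received symbols feed the JSCC decoder for the initial reconstruction, while
# snr_db (and any estimated channel gain) conditions the diffusion denoising stage.
rx = awgn_channel(np.ones(3072, dtype=complex), snr_db=5.0)
```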
Citations: 0
Erratum to “Dual Self-Attention is What You Need for Model Drift Detection in 6G Networks”
Pub Date : 2025-10-29 DOI: 10.1109/TMLCN.2025.3618993
Mazene Ameur;Bouziane Brik;Adlen Ksentini
Presents corrections to the paper “Dual Self-Attention is What You Need for Model Drift Detection in 6G Networks”.
Citations: 0
SD-PPDDA: A Privacy Efficient Decentralized Dual Averaging Algorithm Over Networks
Pub Date : 2025-10-24 DOI: 10.1109/TMLCN.2025.3625519
Qingguo Lü;Chenglong He;Keke Zhang;Huaqing Li;Tingwen Huang
This paper studies a decentralized online constrained optimization problem characterized by a shared constraint set. Nodes in the communication and learning network conduct local computations and communications to collaboratively solve the problem. Each node can access its own local cost function, whose value depends on its decision at each time step. However, because nodes continuously exchange privacy-sensitive information, most existing algorithms for this problem are susceptible to privacy leakage. To address this challenge, we propose an effective state-decomposition-based privacy-preserving decentralized dual averaging (SD-PPDDA) algorithm. The SD-PPDDA algorithm employs a state decomposition scheme to preserve privacy without introducing additional hidden signals (which may cause additional optimization errors) or incurring significant computational overhead. Theoretical analysis shows that the SD-PPDDA algorithm achieves the desired sublinear regret, specifically converging at a rate of $\mathcal{O}\left(\sqrt{K}\right)$ (where $K$ denotes the number of iterations), while preserving the privacy of each node’s cost function. In addition, numerical simulations further validate the convergence and practicality of the algorithm.
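To make the dual-averaging part of the algorithm concrete, the sketch below shows a single node's update: accumulate local subgradients in a dual variable, map it back to a primal iterate with an O(1/sqrt(k)) scaling, and project onto the shared constraint set (a Euclidean ball here for simplicity). The mixing of neighbors' dual variables over the network and the state-decomposition privacy mechanism are intentionally omitted; this is an assumption-laden illustration, not SD-PPDDA itself.

```python
import numpy as np

def local_dual_averaging_step(z, subgradient, k, radius):
    """One node-local dual-averaging step over a Euclidean-ball constraint set.

    z           -- accumulated dual variable at this node
    subgradient -- subgradient of the node's current local cost at its decision
    k           -- iteration index (1-based), giving the O(1/sqrt(k)) primal scaling
    radius      -- radius of the shared constraint set
    """
    z = z + subgradient
    x = -z / np.sqrt(k)                 # primal candidate from the averaged duals
    norm = np.linalg.norm(x)
    if norm > radius:
        x = x * (radius / norm)         # projection onto the ball
    return z, x
```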
Citations: 0
Anti-Jamming 5G Millimeter-Wave Communication via Joint Analog and Digital Beamforming: A Bayesian Optimization Approach
Pub Date : 2025-10-17 DOI: 10.1109/TMLCN.2025.3622593
Peihao Yan;Bowei Zhang;Shichen Zhang;Kai Zeng;Huacheng Zeng
5G millimeter-wave (mmWave) communications are essential for enabling ultra-high-speed, low-latency wireless connectivity to support data-intensive applications. However, the highly directional nature and sensitivity of mmWave signals make them particularly susceptible to jamming attacks. Therefore, securing 5G mmWave communication systems against jamming attacks is critical for ensuring reliable wireless connectivity in mission-critical applications. In this paper, we propose an online Bayesian Optimization (BayOpt) framework for joint analog and digital beamforming optimization at a mmWave communication device, aimed at maximizing its packet decoding rate under a constant jamming attack. By modeling the optimization objective as a black-box function and leveraging online learning to guide beam search, the BayOpt framework efficiently identifies near-optimal beam configurations in both the analog and digital domains while not requiring any knowledge of the jamming strategy or channel conditions. We have implemented the proposed anti-jamming solution on a 28 GHz mmWave testbed and conducted extensive evaluations across four distinct jamming scenarios. Over-the-air experiments demonstrate the effectiveness of the BayOpt framework in suppressing jamming interference. Notably, in a scenario where the jamming signal is 10 dB stronger than the desired signal, the BayOpt-enabled mmWave receiver achieves 73% of the throughput observed in a jamming-free environment.
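A minimal version of the black-box beam search can be written with a Gaussian-process surrogate and an upper-confidence-bound acquisition over a discrete set of candidate analog/digital beam configurations. The candidate encoding, the UCB acquisition, and the measure_pdr callback are assumptions used for illustration; the paper's BayOpt framework may differ in kernel, acquisition, and constraints.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def bayopt_beam_search(measure_pdr, candidates, n_trials=20, kappa=2.0, seed=0):
    """Search beam configurations that maximize the measured packet decoding rate.

    measure_pdr -- callable: beam configuration (1-D array) -> measured decoding rate
    candidates  -- array of shape (n_configs, dim), one row per analog+digital beam pair
    """
    rng = np.random.default_rng(seed)
    X, y = [], []
    idx = rng.integers(len(candidates))                     # random initial probe
    for _ in range(n_trials):
        x = candidates[idx]
        X.append(x)
        y.append(measure_pdr(x))
        gp = GaussianProcessRegressor(normalize_y=True).fit(np.array(X), np.array(y))
        mu, sigma = gp.predict(candidates, return_std=True)
        idx = int(np.argmax(mu + kappa * sigma))            # UCB: favor promising, uncertain beams
    best = int(np.argmax(y))
    return X[best], y[best]
```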
Citations: 0
Deep Learning With Width-Wise Early Exiting and Rejection for Computational Efficient and Trustworthy Modulation Classification
Pub Date : 2025-10-08 DOI: 10.1109/TMLCN.2025.3619447
Dieter Verbruggen;Hazem Sallouha;Sofie Pollin
The development of trustworthy and efficient Deep Learning (DL) models is vital for wireless communications, supporting tasks such as automatic modulation classification (AMC), spectrum use, and network optimization. Yet, deploying DL on resource-constrained edge devices remains challenging due to energy and reliability concerns. We propose a width-wise early exiting architecture, a variation of conventional early exiting that enables classification after processing only part of a signal frame. To further improve reliability, we introduce an early rejection mechanism, applying confidence-based abstention both at intermediate exits and the final output. In AMC experiments, our model achieves on average 40% less computation (up to 60% in some cases), while improving classification accuracy by 3% in low-SNR conditions. These results highlight the potential of our approach for robust, efficient, and trustworthy ML deployment in wireless environments.
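The width-wise exit and rejection logic itself fits in a few lines: each exit head sees a growing fraction of the signal frame, a confident intermediate prediction exits early, and a low-confidence final prediction is rejected rather than emitted. The exit fractions, thresholds, and predict_proba placeholders below are assumptions standing in for the trained heads.

```python
import numpy as np

def classify_with_exits(frame, exit_heads, exit_threshold=0.9, reject_threshold=0.5):
    """Width-wise early exiting with confidence-based rejection.

    exit_heads -- list of (fraction, predict_proba) pairs, ordered by increasing fraction;
                  predict_proba maps the first fraction of the frame to class probabilities.
    Returns (predicted_class, exit_index), with predicted_class None when rejected.
    """
    probs = None
    for i, (fraction, predict_proba) in enumerate(exit_heads):
        n_samples = int(len(frame) * fraction)
        probs = predict_proba(frame[:n_samples])
        if probs.max() >= exit_threshold:          # confident: stop computing the rest of the frame
            return int(np.argmax(probs)), i
    if probs.max() < reject_threshold:             # final output still untrustworthy: abstain
        return None, len(exit_heads) - 1
    return int(np.argmax(probs)), len(exit_heads) - 1
```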
Citations: 0
SC-Diffusion: Parameter Generation for Task-Oriented Semantic Communication Systems via Conditional Diffusion Model
Pub Date : 2025-10-07 DOI: 10.1109/TMLCN.2025.3618802
Yanhu Wang;Shuang Zhang;Anbang Zhang;Shuping Dang;Han Zhang;Shuaishuai Guo
Task-oriented semantic communications (ToSC) has received significant attention as a promising paradigm for realizing more efficient and intelligent data services. However, ToSC systems often suffer from limited generalization capabilities, requiring retraining to meet performance demands under varying channel conditions. In recent years, artificial intelligence generated content (AIGC) has shone in computer vision (CV) and natural language processing (NLP), and its potential in wireless communications is also emerging. Motivated by these advances, we propose semantic communications (SC)-diffusion in this paper, which generates high-performance parameters for ToSC systems to address the inherent challenges of semantic communications. Specifically, SC-diffusion begins by using an autoencoder to extract latent representations from trained system parameters. A diffusion model is then trained to generate these latent representations from random noise. In particular, to ensure that the generated parameters are adapted to the real-time communication environment, we incorporate channel information as conditional information into the diffusion model. Finally, the latent representations are decoded by the autoencoder’s decoder to yield the final system parameters. In experiments across various ToSC architectures and real-world datasets, SC-diffusion consistently generates models that perform comparable to or better than the original trained models, with minimal additional computational overhead.
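The generation path described above (noise, then a channel-conditioned latent, then decoded parameters) can be sketched as a standard DDPM-style ancestral sampling loop with the channel state passed as conditioning. The noise schedule, step count, and the eps_model / decoder placeholders are assumptions for illustration; the paper's conditional diffusion model and autoencoder are trained components not reproduced here.

```python
import torch

@torch.no_grad()
def sample_system_parameters(eps_model, decoder, channel_cond, latent_dim, n_steps=50):
    """Generate a parameter latent from noise, conditioned on channel state, then decode it.

    eps_model    -- noise predictor eps(x_t, t, cond), a placeholder for the trained diffusion model
    decoder      -- autoencoder decoder mapping a latent to deployable system parameters
    channel_cond -- tensor encoding real-time channel information (e.g., SNR, channel gain)
    """
    betas = torch.linspace(1e-4, 0.02, n_steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(1, latent_dim)                                   # start from pure noise
    for t in reversed(range(n_steps)):
        eps = eps_model(x, torch.tensor([t]), channel_cond)
        mean = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise                      # ancestral sampling step
    return decoder(x)
```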
Citations: 0