
IEEE Transactions on Machine Learning in Communications and Networking: Latest Publications

Signal Whisperers: Enhancing Wireless Reception Using DRL-Guided Reflector Arrays
Pub Date : 2026-01-01 DOI: 10.1109/TMLCN.2025.3650440
Hieu Le;Oguz Bedir;Mostafa Ibrahim;Jian Tao;Sabit Ekin
This paper presents a multi-agent reinforcement learning (MARL) approach for controlling adjustable metallic reflector arrays to enhance wireless signal reception in non-line-of-sight (NLOS) scenarios. Unlike conventional reconfigurable intelligent surfaces (RIS) that require complex channel estimation, our system employs a centralized training with decentralized execution (CTDE) paradigm where individual agents corresponding to reflector segments autonomously optimize reflector element orientation in three-dimensional space using spatial intelligence based on user location information. Through extensive ray-tracing simulations with dynamic user mobility, the proposed multi-agent beam-focusing framework demonstrates substantial performance improvements over single-agent reinforcement learning baselines, while maintaining rapid adaptation to user movement within one simulation step. Comprehensive evaluation across varying user densities and reflector configurations validates system scalability and robustness. The results demonstrate the potential of learning-based approaches for adaptive wireless propagation control.
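The per-segment agents described above optimize reflector orientation toward a known user location. As a minimal sketch of the underlying geometry (not the authors' code; function names and the reward shape are illustrative), each agent could score a candidate surface normal by how well the specularly reflected ray aligns with the direction to the user:

```python
import numpy as np

def reflect(d_in, normal):
    """Specular reflection of incident unit direction d_in off a plane
    with unit normal n: r = d - 2 (d . n) n."""
    return d_in - 2.0 * np.dot(d_in, normal) * normal

def alignment_reward(d_in, normal, to_user):
    """Toy per-agent reward: cosine alignment between the reflected ray
    and the unit direction toward the user's reported location."""
    n = normal / np.linalg.norm(normal)
    u = to_user / np.linalg.norm(to_user)
    return float(np.dot(reflect(d_in, n), u))

# A wave arriving straight down is reflected straight back up when the
# reflector faces up and the user is directly overhead.
r = alignment_reward(np.array([0.0, -1.0, 0.0]),
                     np.array([0.0, 1.0, 0.0]),
                     np.array([0.0, 1.0, 0.0]))
```

Maximizing this alignment over the normal's two orientation angles is the kind of three-dimensional pointing decision each reflector-segment agent would learn under the CTDE paradigm.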
{"title":"Signal Whisperers: Enhancing Wireless Reception Using DRL-Guided Reflector Arrays","authors":"Hieu Le;Oguz Bedir;Mostafa Ibrahim;Jian Tao;Sabit Ekin","doi":"10.1109/TMLCN.2025.3650440","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3650440","url":null,"abstract":"This paper presents a multi-agent reinforcement learning (MARL) approach for controlling adjustable metallic reflector arrays to enhance wireless signal reception in non-line-of-sight (NLOS) scenarios. Unlike conventional reconfigurable intelligent surfaces (RIS) that require complex channel estimation, our system employs a centralized training with decentralized execution (CTDE) paradigm where individual agents corresponding to reflector segments autonomously optimize reflector element orientation in three-dimensional space using spatial intelligence based on user location information. Through extensive ray-tracing simulations with dynamic user mobility, the proposed multi-agent beam-focusing framework demonstrates substantial performance improvements over single-agent reinforcement learning baselines, while maintaining rapid adaptation to user movement within one simulation step. Comprehensive evaluation across varying user densities and reflector configurations validates system scalability and robustness. 
The results demonstrate the potential of learning-based approaches for adaptive wireless propagation control.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"4 ","pages":"265-281"},"PeriodicalIF":0.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11322690","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145929653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Clustering-Assisted Deep Reinforcement Learning for Joint Trajectory Design and Resource Allocation in Two-Tier-Cooperated UAVs Communications
Pub Date : 2025-12-23 DOI: 10.1109/TMLCN.2025.3647806
Shujun Zhao;Simeng Feng;Chao Dong;Xiaojun Zhu;Qihui Wu
Considering their high mobility and relatively low cost, uncrewed aerial vehicles (UAVs) equipped with mobile base stations are regarded as a potential technological approach. However, the dual pressures of limited onboard resources of UAVs and the demand for high-quality services in dynamic low-altitude applications jointly form a bottleneck for system performance. Although multi-UAVs communication networks can provide higher system performance through coordinated deployment, the challenges of cooperation and competition among UAVs, as well as more complex optimization problems, significantly increase costs and pose formidable challenges. To overcome the challenges of low coordination efficiency and intense resource competition among multiple UAVs, and to ensure the timely and efficient satisfaction of ground users (GUs) communication service demands, this paper conceives a centralized-controlled two-tier-cooperated UAVs communication network. The network comprises a central UAV (C-UAV) tier as control center and a marginal UAV (M-UAV) tier to serve GUs. In response to the increasingly dynamic and complex scenarios, along with the challenge of insufficient generalization ability in Deep Reinforcement Learning (DRL) algorithms, we propose a clustering-assisted dual-agent soft actor critic (CDA-SAC) algorithm for trajectory design and resource allocation, aiming to maximize the fair energy efficiency of the system. Specifically, by integrating a clustering-matching method with a dual-agent strategy, the proposed CDA-SAC algorithm achieves significant improvements in generalization ability and exploration capability. Simulation results demonstrate that the proposed CDA-SAC algorithm can be deployed without retraining in scenarios with different numbers of GUs. Furthermore, the CDA-SAC algorithm outperforms both the multi-UAV scenarios based on the MADDPG algorithm and the FDMA scheme in terms of fairness and total energy efficiency.
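The clustering-matching step that groups ground users per serving M-UAV can be illustrated with plain Lloyd's k-means; this is a stand-in sketch under that assumption, not the paper's CDA-SAC implementation:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Minimal Lloyd's k-means, standing in for the clustering-matching
    step that partitions ground users (GUs) among M-UAVs."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)].copy()
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # assign each GU to its nearest centroid, then re-center
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

# two well-separated GU groups -> two M-UAV service clusters
gus = np.array([[0.0, 0.0], [0.5, 0.0], [10.0, 10.0], [10.0, 10.5]])
centroids, labels = kmeans(gus, k=2)
```

In the paper's framework the cluster assignment feeds the dual-agent SAC policies; here it only shows how a clustering stage decouples "who serves whom" from the per-UAV trajectory and resource decisions.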
{"title":"Clustering-Assisted Deep Reinforcement Learning for Joint Trajectory Design and Resource Allocation in Two-Tier-Cooperated UAVs Communications","authors":"Shujun Zhao;Simeng Feng;Chao Dong;Xiaojun Zhu;Qihui Wu","doi":"10.1109/TMLCN.2025.3647806","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3647806","url":null,"abstract":"Considering their high mobility and relatively low cost, uncrewed aerial vehicles (UAVs) equipped with mobile base stations are regarded as a potential technological approach. However, the dual pressures of limited onboard resources of UAVs and the demand for high-quality services in dynamic low-altitude applications jointly form a bottleneck for system performance. Although multi-UAVs communication networks can provide higher system performance through coordinated deployment, the challenges of cooperation and competition among UAVs, as well as more complex optimization problems, significantly increase costs and pose formidable challenges. To overcome the challenges of low coordination efficiency and intense resource competition among multiple UAVs, and to ensure the timely and efficient satisfaction of ground users (GUs) communication service demands, this paper conceives a centralized-controlled two-tier-cooperated UAVs communication network. The network comprises a central UAV (C-UAV) tier as control center and a marginal UAV (M-UAV) tier to serve GUs. In response to the increasingly dynamic and complex scenarios, along with the challenge of insufficient generalization ability in Deep Reinforcement Learning (DRL) algorithms, we propose a clustering-assisted dual-agent soft actor critic (CDA-SAC) algorithm for trajectory design and resource allocation, aiming to maximize the fair energy efficiency of the system. Specifically, by integrating a clustering-matching method with a dual-agent strategy, the proposed CDA-SAC algorithm achieves significant improvements in generalization ability and exploration capability. 
Simulation results demonstrate that the proposed CDA-SAC algorithm can be deployed without retraining in scenarios with different numbers of GUs. Furthermore, the CDA-SAC algorithm outperforms both the multi-UAV scenarios based on the MADDPG algorithm and the FDMA scheme in terms of fairness and total energy efficiency.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"4 ","pages":"178-197"},"PeriodicalIF":0.0,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11313631","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145886662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Data-Driven Cellular Mobility Management Via Bayesian Optimization and Reinforcement Learning
Pub Date : 2025-12-23 DOI: 10.1109/TMLCN.2025.3647807
Mohamed Benzaghta;Sahar Ammar;David López-Pérez;Basem Shihada;Giovanni Geraci
Mobility management in cellular networks faces increasing complexity due to network densification and heterogeneous user mobility characteristics. Traditional handover (HO) mechanisms, which rely on predefined parameters such as A3-offset and time-to-trigger (TTT), often fail to optimize mobility performance across varying speeds and deployment conditions. Fixed A3-offset and TTT configurations either delay HOs, increasing radio link failures (RLFs), or accelerate them, leading to excessive ping-pong effects. To address these challenges, we propose two distinct data-driven mobility management approaches leveraging high-dimensional Bayesian optimization (HD-BO) and deep reinforcement learning (DRL). While HD-BO optimizes predefined HO parameters such as A3-offset and TTT, DRL provides a parameter-free alternative by allowing an agent to select serving cells based on real-time network conditions. We systematically compare these two approaches in real-world site-specific deployment scenarios (employing Sionna ray tracing for site-specific channel propagation modeling), highlighting their complementary strengths. Results show that both HD-BO and DRL outperform 3GPP set-1 (TTT of 480 ms and A3-offset of 3 dB) and set-5 (TTT of 40 ms and A3-offset of −1 dB) benchmarks. We augment HD-BO with transfer learning so it can generalize across a range of user speeds. Applying the same transfer-learning strategy to the DRL method reduces its training time by a factor of 2.5 while preserving optimal HO performance, showing that it adapts efficiently to the mobility of aerial users such as UAVs. Simulations further reveal that HD-BO remains more sample-efficient than DRL, making it more suitable for scenarios with limited training data.
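The A3-offset/TTT mechanism the fixed baselines use can be sketched directly: a handover is triggered only when the neighbor's RSRP exceeds the serving cell's by the A3 offset continuously for the time-to-trigger window. This is a simplified illustration of the standard rule, with illustrative parameter names and per-sample timing:

```python
def a3_handover(serving_rsrp, neighbor_rsrp, a3_offset_db=3.0, ttt_steps=4):
    """Return the sample index at which an A3 handover fires, or None.

    Trigger condition: neighbor RSRP > serving RSRP + a3_offset_db,
    sustained for ttt_steps consecutive samples (the TTT window).
    """
    run = 0
    for i, (s, n) in enumerate(zip(serving_rsrp, neighbor_rsrp)):
        run = run + 1 if n > s + a3_offset_db else 0  # TTT timer resets on any miss
        if run >= ttt_steps:
            return i
    return None
```

A large offset or long TTT (like 3GPP set-1) delays the trigger, risking radio link failure during fast movement; a small or negative offset with short TTT (set-5) fires early, risking ping-pong between cells, which is exactly the trade-off HD-BO tunes and DRL sidesteps.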
{"title":"Data-Driven Cellular Mobility Management Via Bayesian Optimization and Reinforcement Learning","authors":"Mohamed Benzaghta;Sahar Ammar;David López-Pére;Basem Shihada;Giovanni Geraci","doi":"10.1109/TMLCN.2025.3647807","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3647807","url":null,"abstract":"Mobility management in cellular networks faces increasing complexity due to network densification and heterogeneous user mobility characteristics. Traditional handover (HO) mechanisms, which rely on predefined parameters such as A3-offset and time-to-trigger (TTT), often fail to optimize mobility performance across varying speeds and deployment conditions. Fixed A3-offset and TTT configurations either delay HOs, increasing radio link failures (RLFs), or accelerate them, leading to excessive ping-pong effects. To address these challenges, we propose two distinct data-driven mobility management approaches leveraging high-dimensional Bayesian optimization (HD-BO) and deep reinforcement learning (DRL). While HD-BO optimizes predefined HO parameters such as A3-offset and TTT, DRL provides a parameter-free alternative by allowing an agent to select serving cells based on real-time network conditions. We systematically compare these two approaches in real-world site-specific deployment scenarios (employing Sionna ray tracing for site-specific channel propagation modeling), highlighting their complementary strengths. Results show that both HD-BO and DRL outperform 3GPP set-1 (TTT of 480 ms and A3-offset of 3 dB) and set-5 (TTT of 40 ms and A3-offset of −1 dB) benchmarks. We augment HD-BO with transfer learning so it can generalize across a range of user speeds. Applying the same transfer-learning strategy to the DRL method reduces its training time by a factor of 2.5 while preserving optimal HO performance, showing that it adapts efficiently to the mobility of aerial users such as UAVs. 
Simulations further reveal that HD-BO remains more sample-efficient than DRL, making it more suitable for scenarios with limited training data.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"4 ","pages":"228-244"},"PeriodicalIF":0.0,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11313634","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145929689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Transforming Indoor Localization: Advanced Transformer Architecture for NLOS Dominated Wireless Environments With Distributed Sensors
Pub Date : 2025-12-23 DOI: 10.1109/TMLCN.2025.3647376
Saad Masrur;Jung-Fu Cheng;Atieh R. Khamesi;İsmail Güvenç
Indoor localization in challenging non-line-of-sight (NLOS) environments often leads to poor accuracy with traditional approaches. Deep learning (DL) has been applied to tackle these challenges; however, many DL approaches overlook computational complexity, especially for floating-point operations (FLOPs), making them unsuitable for resource-limited devices. Transformer-based models have achieved remarkable success in natural language processing (NLP) and computer vision (CV) tasks, motivating their use in wireless applications. However, their use in indoor localization remains nascent, and directly applying Transformers for indoor localization can be both computationally intensive and exhibit limitations in accuracy. To address these challenges, in this work, we introduce a novel tokenization approach, referred to as Sensor Snapshot Tokenization (SST), which preserves variable-specific representations of power delay profile (PDP) and enhances attention mechanisms by effectively capturing multi-variate correlation. Complementing this, we propose a lightweight Swish-Gated Linear Unit-based Transformer (L-SwiGLU-T) model, designed to reduce computational complexity without compromising localization accuracy. Together, these contributions mitigate the computational burden and dependency on large datasets, making Transformer models more efficient and suitable for resource-constrained scenarios. Experimental results on simulated and real-world datasets demonstrate that SST and L-SwiGLU-T achieve substantial accuracy and efficiency gains, outperforming larger Transformer and CNN baselines by over 40% while using significantly fewer FLOPs and training samples.
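The core idea of Sensor Snapshot Tokenization, as the abstract describes it, is one token per sensor so that cross-sensor (multi-variate) correlation is what attention operates on. A minimal sketch under that reading, with an arbitrary random projection in place of the model's learned embedding:

```python
import numpy as np

def sensor_snapshot_tokenize(pdp, d_model, seed=0):
    """Map each sensor's power delay profile (PDP) to one token.

    pdp: array of shape (n_sensors, n_taps). The per-sensor projection
    preserves variable-specific structure: token i depends only on
    sensor i's PDP, so attention later mixes *across sensors*.
    """
    rng = np.random.default_rng(seed)
    n_sensors, n_taps = pdp.shape
    W = rng.standard_normal((n_taps, d_model)) / np.sqrt(n_taps)  # toy stand-in for a learned linear embedding
    return pdp @ W  # shape (n_sensors, d_model)

# e.g. 8 distributed sensors, 64 delay taps each, 32-dim tokens
tokens = sensor_snapshot_tokenize(np.random.default_rng(1).random((8, 64)), d_model=32)
```

Contrast this with flattening all sensors' taps into a single long sequence, where attention cost grows with taps times sensors; tokenizing per sensor keeps the sequence length at the sensor count, consistent with the paper's FLOP-reduction goal.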
{"title":"Transforming Indoor Localization: Advanced Transformer Architecture for NLOS Dominated Wireless Environments With Distributed Sensors","authors":"Saad Masrur;Jung-Fu Cheng;Atieh R. Khamesi;İsmail Güvenç","doi":"10.1109/TMLCN.2025.3647376","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3647376","url":null,"abstract":"Indoor localization in challenging non-line-of-sight (NLOS) environments often leads to poor accuracy with traditional approaches. Deep learning (DL) has been applied to tackle these challenges; however, many DL approaches overlook computational complexity, especially for floating-point operations (FLOPs), making them unsuitable for resource-limited devices. Transformer-based models have achieved remarkable success in natural language processing (NLP) and computer vision (CV) tasks, motivating their use in wireless applications. However, their use in indoor localization remains nascent, and directly applying Transformers for indoor localization can be both computationally intensive and exhibit limitations in accuracy. To address these challenges, in this work, we introduce a novel tokenization approach, referred to as Sensor Snapshot Tokenization (SST), which preserves variable-specific representations of power delay profile (PDP) and enhances attention mechanisms by effectively capturing multi-variate correlation. Complementing this, we propose a lightweight Swish-Gated Linear Unit-based Transformer (L-SwiGLU-T) model, designed to reduce computational complexity without compromising localization accuracy. Together, these contributions mitigate the computational burden and dependency on large datasets, making Transformer models more efficient and suitable for resource-constrained scenarios. 
Experimental results on simulated and real-world datasets demonstrate that SST and L-SwiGLU-T achieve substantial accuracy and efficiency gains, outperforming larger Transformer and CNN baselines by over 40% while using significantly fewer FLOPs and training samples.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"4 ","pages":"161-177"},"PeriodicalIF":0.0,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11313538","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145886663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Explainable Multi-Agent Reinforcement Learning for Extended Reality Codec Adaptation
Pub Date : 2025-12-18 DOI: 10.1109/TMLCN.2025.3646125
Pedro Enrique Iturria-Rivera;Raimundas Gaigalas;Medhat Elsayed;Majid Bavand;Yigit Ozcan;Melike Erol-Kantarci
Extended Reality (XR) services are set to transform applications over 5th and 6th generation wireless networks, delivering immersive experiences. Concurrently, Artificial Intelligence (AI) advancements have expanded their role in wireless networks; however, trust and transparency in AI remain to be strengthened. Thus, providing explanations for AI-enabled systems can enhance trust. We introduce Value Function Factorization (VFF)-based Explainable (X) Multi-Agent Reinforcement Learning (MARL) algorithms, explaining reward design in XR codec adaptation through reward decomposition. We contribute four enhancements to XMARL algorithms. Firstly, we detail architectural modifications to enable reward decomposition in VFF-based MARL algorithms: Value Decomposition Networks (VDN), Mixture of Q-Values (QMIX), and Q-Transformation (Q-TRAN). Secondly, inspired by multi-task learning, we reduce the overhead of vanilla XMARL algorithms. Thirdly, we propose a new explainability metric, Reward Difference Fluctuation Explanation (RDFX), suitable for problems with adjustable parameters. Lastly, we propose adaptive XMARL, leveraging network gradients and reward decomposition for improved action selection. Simulation results indicate that, in XR codec adaptation, the Packet Delivery Ratio reward is the primary contributor to optimal performance compared to the initial composite reward, which included delay and Data Rate Ratio components. Modifications to VFF-based XMARL algorithms, incorporating multi-headed structures and adaptive loss functions, enable the best-performing algorithm, Multi-Headed Adaptive (MHA)-QMIX, to achieve significant average gains over the Adjust Packet Size baseline of up to 10.7%, 41.4%, 33.3%, and 67.9% in XR index, jitter, delay, and Packet Loss Ratio (PLR), respectively.
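The reward-decomposition view the abstract relies on can be shown with a toy example: keep one value head per component of the composite XR reward, and read off each component's share of the joint value. Numbers and names below are illustrative, not results from the paper:

```python
# Toy decomposed value for one state-action pair: one head per reward
# component of the XR composite reward (values are made up for illustration).
q_heads = {"packet_delivery_ratio": 0.62, "delay": 0.21, "data_rate_ratio": 0.17}
q_total = sum(q_heads.values())

# Per-component contribution to the joint value: the quantity a
# reward-decomposition explanation inspects to rank reward terms.
contrib = {name: q / q_total for name, q in q_heads.items()}
dominant = max(contrib, key=contrib.get)
```

Ranking components this way is what lets the authors conclude that the Packet Delivery Ratio term, rather than the delay or Data Rate Ratio terms, drives the learned behavior.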
{"title":"Explainable Multi-Agent Reinforcement Learning for Extended Reality Codec Adaptation","authors":"Pedro Enrique Iturria-Rivera;Raimundas Gaigalas;Medhat Elsayed;Majid Bavand;Yigit Ozcan;Melike Erol-Kantarci","doi":"10.1109/TMLCN.2025.3646125","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3646125","url":null,"abstract":"Extended Reality (XR) services are set to transform applications over <inline-formula> <tex-math>${mathbf {5}}^{th}$ </tex-math></inline-formula> and <inline-formula> <tex-math>${mathbf {6}}^{th}$ </tex-math></inline-formula> generation wireless networks, delivering immersive experiences. Concurrently, Artificial Intelligence (AI) advancements have expanded their role in wireless networks, however, trust and transparency in AI remain to be strengthened. Thus, providing explanations for AI-enabled systems can enhance trust. We introduce Value Function Factorization (VFF)-based Explainable (X) Multi-Agent Reinforcement Learning (MARL) algorithms, explaining reward design in XR codec adaptation through reward decomposition. We contribute four enhancements to XMARL algorithms. Firstly, we detail architectural modifications to enable reward decomposition in VFF-based MARL algorithms: Value Decomposition Networks (VDN), Mixture of Q-Values (QMIX), and Q-Transformation (Q-TRAN). Secondly, inspired by multi-task learning, we reduce the overhead of vanilla XMARL algorithms. Thirdly, we propose a new explainability metric, Reward Difference Fluctuation Explanation (RDFX), suitable for problems with adjustable parameters. Lastly, we propose adaptive XMARL, leveraging network gradients and reward decomposition for improved action selection. Simulation results indicate that, in XR codec adaptation, the Packet Delivery Ratio reward is the primary contributor to optimal performance compared to the initial composite reward, which included delay and Data Rate Ratio components. 
Modifications to VFF-based XMARL algorithms, incorporating multi-headed structures and adaptive loss functions, enable the best-performing algorithm, Multi-Headed Adaptive (MHA)-QMIX, to achieve significant average gains over the Adjust Packet Size baseline up to 10.7%, 41.4%, 33.3%, and 67.9% in XR index, jitter, delay, and Packet Loss Ratio (PLR), respectively.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"4 ","pages":"245-264"},"PeriodicalIF":0.0,"publicationDate":"2025-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11303975","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145929688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AIS-Based Hybrid Vessel Trajectory Prediction for Enhanced Maritime Navigation
Pub Date : 2025-12-16 DOI: 10.1109/TMLCN.2025.3644333
Ons Aouedi;Flor Ortiz;Thang X. Vu;Alexandre Lefourn;Felix Giese;Guillermo Gutierrez;Symeon Chatzinotas
The growing integration of non-terrestrial networks (NTNs), particularly low Earth orbit (LEO) satellite constellations, has significantly extended the reach of maritime connectivity, supporting critical applications such as vessel monitoring, navigation safety, and maritime surveillance in remote and oceanic regions. Automatic Identification System (AIS) data, increasingly collected through a combination of satellite and terrestrial infrastructures, provide a rich source of spatiotemporal vessel information. However, accurate trajectory prediction in maritime domains remains challenging due to irregular sampling rates, dynamic environmental conditions, and heterogeneous vessel behaviors. This study proposes a velocity-based trajectory prediction framework that leverages AIS data collected from integrated satellite-terrestrial networks. Rather than directly predicting absolute positions (latitude and longitude), our model predicts vessel motion in the form of latitude and longitude velocities. This formulation simplifies the learning task, enhances temporal continuity, and improves scalability, making it well-suited for resource-constrained NTN environments. The predictive architecture is built upon a Long Short-Term Memory network enhanced with attention mechanisms and residual connections (LSTM-RA), enabling it to capture complex temporal dependencies and adapt to noise in real-world AIS data. Extensive experiments on two maritime datasets validate the robustness and accuracy of our framework, demonstrating clear improvements over state-of-the-art baselines.
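Predicting velocities rather than absolute coordinates means positions are recovered by integrating the predicted per-step velocity sequence from a known starting fix. A minimal sketch of that reconstruction step (illustrative names; the paper's model supplies the velocity predictions):

```python
def integrate_track(lat0, lon0, v_lat, v_lon, dt=1.0):
    """Reconstruct a position track from predicted per-step velocities.

    lat0, lon0: last known AIS fix (degrees).
    v_lat, v_lon: predicted velocity sequences (degrees per step).
    Returns the full track including the starting fix.
    """
    lats, lons = [lat0], [lon0]
    for dv_lat, dv_lon in zip(v_lat, v_lon):
        lats.append(lats[-1] + dv_lat * dt)
        lons.append(lons[-1] + dv_lon * dt)
    return lats, lons

# two steps of steady north-east motion from the origin fix
lats, lons = integrate_track(0.0, 0.0, [0.1, 0.1], [0.2, 0.2])
```

One consequence of this formulation, which the abstract credits with temporal continuity, is that consecutive predicted positions cannot jump arbitrarily: each is anchored to the previous one plus a bounded velocity increment.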
{"title":"AIS-Based Hybrid Vessel Trajectory Prediction for Enhanced Maritime Navigation","authors":"Ons Aouedi;Flor Ortiz;Thang X. Vu;Alexandre Lefourn;Felix Giese;Guillermo Gutierrez;Symeon Chatzinotas","doi":"10.1109/TMLCN.2025.3644333","DOIUrl":"https://doi.org/10.1109/TMLCN.2025.3644333","url":null,"abstract":"The growing integration of non-terrestrial networks (NTNs), particularly low Earth orbit (LEO) satellite constellations, has significantly extended the reach of maritime connectivity, supporting critical applications such as vessel monitoring, navigation safety, and maritime surveillance in remote and oceanic regions. Automatic Identification System (AIS) data, increasingly collected through a combination of satellite and terrestrial infrastructures, provide a rich source of spatiotemporal vessel information. However, accurate trajectory prediction in maritime domains remains challenging due to irregular sampling rates, dynamic environmental conditions, and heterogeneous vessel behaviors. This study proposes a velocity-based trajectory prediction framework that leverages AIS data collected from integrated satellite–terrestrial networks. Rather than directly predicting absolute positions (latitude and longitude), our model predicts vessel motion in the form of latitude and longitude velocities. This formulation simplifies the learning task, enhances temporal continuity, and improves scalability, making it well-suited for resource-constrained NTN environments. The predictive architecture is built upon a Long Short-Term Memory network enhanced with attention mechanisms and residual connections (<monospace>LSTM-RA</monospace>), enabling it to capture complex temporal dependencies and adapt to noise in real-world AIS data. 
Extensive experiments on two maritime datasets validate the robustness and accuracy of our framework, demonstrating clear improvements over state-of-the-art baselines.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"4 ","pages":"198-210"},"PeriodicalIF":0.0,"publicationDate":"2025-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11301841","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145886584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-Agent Federated Learning Using Covariance-Based Nearest Neighbor Gaussian Processes
Pub Date : 2025-12-12 DOI: 10.1109/TMLCN.2025.3643409
George P. Kontoudis;Daniel J. Stilwell
In this paper, we propose scalable methods for Gaussian process (GP) prediction in decentralized multi-agent systems. Multiple aggregation techniques for GP prediction are decentralized with the use of iterative and consensus methods. Moreover, we introduce a covariance-based nearest neighbor selection strategy that leverages cross-covariance similarity, enabling subsets of agents to make accurate predictions. The proposed decentralized schemes preserve the consistency properties of their centralized counterparts, while adhering to federated learning principles by restricting raw data exchange between agents. We validate the efficacy of the proposed decentralized algorithms with numerical experiments on real-world sea surface temperature and ground elevation map datasets across multiple fleet sizes.
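The covariance-based neighbor selection idea can be sketched with a standard squared-exponential kernel: rank agents by their kernel cross-covariance with the query point and keep the top k. This is a one-dimensional illustration under that assumption, not the authors' algorithm:

```python
import math

def rbf(a, b, ell=1.0):
    """Squared-exponential (RBF) kernel on scalar inputs."""
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def nearest_by_covariance(x_star, agent_inputs, k=2):
    """Select the k agents whose inputs have the highest kernel
    cross-covariance with the query x_star.

    For a stationary kernel like the RBF this coincides with distance-based
    selection, but ranking by covariance generalizes to nonstationary or
    multi-output kernels where distance alone would mislead.
    """
    ranked = sorted(range(len(agent_inputs)),
                    key=lambda i: -rbf(x_star, agent_inputs[i]))
    return ranked[:k]
```

Only the selected subset of agents then contributes to the GP prediction at `x_star`, which is how the scheme keeps per-prediction cost bounded while exchanging no raw data between agents.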
IEEE Transactions on Machine Learning in Communications and Networking, vol. 4, pp. 115-138, 2025-12-12.
Citations: 0
A Deeper Look on Explanation Methods for Deep Learning Models on Raw-Based Traffic of DDoS Attacks
Pub Date : 2025-12-09 DOI: 10.1109/TMLCN.2025.3642211
Basil AsSadhan;Abdulmuneem Bashaiwth;Hamad Binsalleeh
With the increasing prevalence of DDoS attacks, various machine learning-based detection models have been employed to mitigate these malicious behaviors. Understanding how machine learning models function can be quite complex, especially for intricate and nonlinear models like deep learning architectures. Recently, various techniques have been developed to interpret deep learning models and address issues of ambiguity. In this paper, we present a comprehensive analysis of various explanation methods applied to a Long Short-Term Memory (LSTM) model for detecting Distributed Denial of Service (DDoS) attacks on raw traffic data. While previous studies have focused primarily on improving detection accuracy on feature-based datasets, this paper emphasizes the importance of interpretability of deep learning models on raw-based traffic datasets. By employing explanation techniques such as LIME, SHAP, Anchor, and LORE, we provide insights into the decision-making processes of LSTM models, thereby enhancing trust and understanding in classifying DDoS attacks.
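The core idea behind perturbation explainers such as LIME can be illustrated with a minimal occlusion-style attribution: replace one input field at a time with a neutral baseline and measure how much the model's score drops. This is a generic sketch, not the paper's pipeline; the toy detector, field names, and baseline values below are all our own assumptions:

```python
def occlusion_importance(predict, sample, baseline):
    # Score each field by how much the model's output drops when that
    # field is replaced with a neutral baseline value
    base_score = predict(sample)
    importances = {}
    for field in sample:
        perturbed = dict(sample)
        perturbed[field] = baseline.get(field, 0)
        importances[field] = base_score - predict(perturbed)
    return importances

# Toy stand-in for a trained detector: flags high packet rates and low TTLs
def toy_detector(pkt):
    return 0.8 * min(pkt["pkt_rate"] / 1000, 1.0) + (0.2 if pkt["ttl"] < 32 else 0.0)

sample = {"pkt_rate": 1000, "ttl": 16, "src_port": 53}
baseline = {"pkt_rate": 10, "ttl": 64, "src_port": 0}
imp = occlusion_importance(toy_detector, sample, baseline)
```

Here `imp` ranks `pkt_rate` above `ttl`, and `src_port` at zero, mirroring how such methods surface the packet fields driving a true or false positive.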
IEEE Transactions on Machine Learning in Communications and Networking, vol. 4, pp. 139-160, 2025-12-09.
Citations: 0
IEEE Communications Society Board of Governors
Pub Date : 2025-12-08 DOI: 10.1109/TMLCN.2025.3638067
IEEE Transactions on Machine Learning in Communications and Networking, vol. 4, pp. C3-C3, 2025-12-08.
Citations: 0
Adaptive Nonlinear Digital Self-Interference Cancellation for Full-Duplex Wireless Systems Using Hypernetwork-Based Incremental Learning
Pub Date : 2025-12-02 DOI: 10.1109/TMLCN.2025.3639365
Sheikh Islam;Xin Ma;Chunxiao Chigan
Achieving effective self-interference cancellation (SIC) in full-duplex (FD) wireless communication systems under time-varying channel conditions remains a significant challenge. To address this challenge, we propose a novel adaptive SIC solution through leveraging Hyper Neural Networks (HyperNet) and incremental learning (IL). Unlike the existing methods that rely on offline training or lack real-time adaptability, our approach enables autonomous learning and fast adaptation to the complex, nonlinear, and time-varying nature of self-interference (SI) channels. It effectively addresses dynamic adaptation challenges, such as catastrophic forgetting, through the use of experience replay (ER). Our experimental results show that traditional model-based methods exhibit limited adaptability under dynamic channel conditions, while conventional data-driven models fail to maintain consistent performance without the adaptive capabilities provided by IL. In contrast, the proposed HyperNet-based IL model reduces training time by 33% and achieves three times faster convergence compared to a standalone HyperNet trained separately for each static condition. Extensive evaluations using simulated datasets that emulate real-world scenarios demonstrate that our approach consistently achieves SI suppression down to the noise floor. It also delivers significantly lower computational complexity and training time. These improvements collectively enhance the efficiency and reliability of FD communication systems operating in dynamic wireless environments.
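The experience replay (ER) mechanism the abstract credits with mitigating catastrophic forgetting can be illustrated with a minimal reservoir-sampling buffer: old channel samples stay represented as new ones stream in, and each adaptation step trains on a mix of fresh and replayed data. The class name, capacity, and reservoir policy below are our own assumptions, not the paper's design:

```python
import random

class ReplayBuffer:
    # Fixed-capacity buffer using reservoir sampling, so samples from
    # earlier channel conditions remain represented as new ones arrive
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            # Keep the new item with probability capacity / seen
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item

    def sample(self, batch_size):
        # Draw replayed samples to mix into each adaptation step
        return self.rng.sample(self.items, min(batch_size, len(self.items)))

# Usage: stream 1000 channel samples through a 64-slot buffer,
# then draw a replay batch for one incremental-learning update.
buf = ReplayBuffer(capacity=64)
for step in range(1000):
    buf.add(("channel_sample", step))
batch = buf.sample(16)
```

Reservoir sampling keeps the buffer an unbiased uniform sample of everything seen so far, which is one simple way to guard against forgetting earlier channel regimes.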
IEEE Transactions on Machine Learning in Communications and Networking, vol. 4, pp. 60-75, 2025-12-02.
Citations: 0