
Latest publications: IEEE Transactions on Machine Learning in Communications and Networking

Learning Radio Environments by Differentiable Ray Tracing
Pub Date : 2024-10-04 DOI: 10.1109/TMLCN.2024.3474639
Jakob Hoydis;Fayçal Aït Aoudia;Sebastian Cammerer;Florian Euchner;Merlin Nimier-David;Stephan Ten Brink;Alexander Keller
Ray tracing (RT) is instrumental in 6G research in order to generate spatially-consistent and environment-specific channel impulse responses (CIRs). While acquiring accurate scene geometries is now relatively straightforward, determining material characteristics requires precise calibration using channel measurements. We therefore introduce a novel gradient-based calibration method, complemented by differentiable parametrizations of material properties, scattering and antenna patterns. Our method seamlessly integrates with differentiable ray tracers that enable the computation of derivatives of CIRs with respect to these parameters. Essentially, we approach field computation as a large computational graph wherein parameters are trainable akin to weights of a neural network (NN). We have validated our method using both synthetic data and real-world indoor channel measurements, employing a distributed multiple-input multiple-output (MIMO) channel sounder.
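The core idea above, treating field computation as a computational graph whose material parameters are fitted by gradient descent, can be sketched in miniature. The one-path propagation model, the names (`cir_tap`, `gamma`), and the optimizer settings below are illustrative assumptions, not the paper's actual parametrization; a real differentiable ray tracer would compute the gradient automatically over the full scene.

```python
# Toy gradient-based calibration: one specular path whose amplitude depends on
# an unknown material reflection coefficient `gamma`. We fit `gamma` so the
# predicted channel tap matches a (here synthetic) measurement, with the
# gradient derived by hand instead of by automatic differentiation.

def cir_tap(gamma, d):
    # amplitude of a one-bounce path: reflection coefficient times 1/d decay
    return gamma / d

def calibrate(measured, d, lr=5.0, steps=200):
    gamma = 0.1  # initial guess for the material parameter
    for _ in range(steps):
        residual = cir_tap(gamma, d) - measured  # prediction error
        grad = residual / d                      # d(0.5*residual**2)/d(gamma)
        gamma -= lr * grad                       # gradient-descent update
    return gamma

true_gamma = 0.7
d = 5.0
measured = cir_tap(true_gamma, d)  # stands in for a channel measurement
estimate = calibrate(measured, d)
```

With more paths and parameters the same loop becomes training a computational graph, which is the sense in which the parameters behave like NN weights.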
Citations: 0
Smart Jamming Attack and Mitigation on Deep Transfer Reinforcement Learning Enabled Resource Allocation for Network Slicing
Pub Date : 2024-09-30 DOI: 10.1109/TMLCN.2024.3470760
Shavbo Salehi;Hao Zhou;Medhat Elsayed;Majid Bavand;Raimundas Gaigalas;Yigit Ozcan;Melike Erol-Kantarci
Network slicing is a pivotal paradigm in wireless networks enabling customized services to users and applications. Yet, intelligent jamming attacks threaten the performance of network slicing. In this paper, we focus on the security aspect of network slicing over a deep transfer reinforcement learning (DTRL) enabled scenario. We first demonstrate how a deep reinforcement learning (DRL)-enabled jamming attack exposes potential risks. In particular, the attacker can intelligently jam resource blocks (RBs) reserved for slices by monitoring transmission signals and perturbing the assigned resources. Then, we propose a DRL-driven mitigation model to mitigate the intelligent attacker. Specifically, the defense mechanism generates interference on unallocated RBs where another antenna is used for transmitting powerful signals. This causes the jammer to consider these RBs as allocated RBs and generate interference for those instead of the allocated RBs. The analysis revealed that the intelligent DRL-enabled jamming attack caused a significant 50% degradation in network throughput and 60% increase in latency in comparison with the no-attack scenario. However, with the implemented mitigation measures, we observed 80% improvement in network throughput and 70% reduction in latency in comparison to the under-attack scenario.
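The decoy mechanism described above can be illustrated with a deliberately simplified model (the paper's attacker and defender are DRL agents; here the jammer is reduced to a power-greedy rule, and the RB counts, power levels, and function names are assumptions):

```python
# Minimal sketch of the decoy defense: an intelligent jammer targets the
# resource block (RB) with the strongest observed power, so the defender
# radiates a higher-power decoy signal on an unallocated RB to draw the
# attack away from the RB that carries slice traffic.

def jammer_pick(observed_power):
    # the jammer attacks the most active-looking RB
    return max(range(len(observed_power)), key=lambda rb: observed_power[rb])

def slot_survives(allocated_rb, decoy_rb, n_rbs=8,
                  tx_power=1.0, decoy_power=3.0):
    power = [0.0] * n_rbs
    power[allocated_rb] = tx_power
    if decoy_rb is not None:
        power[decoy_rb] = decoy_power  # strong signal on a free RB
    return jammer_pick(power) != allocated_rb  # True -> slice traffic survives

without_decoy = slot_survives(allocated_rb=2, decoy_rb=None)  # jammer hits us
with_decoy = slot_survives(allocated_rb=2, decoy_rb=5)        # attack diverted
```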
Citations: 0
Optimizing Resource Fragmentation in Virtual Network Function Placement Using Deep Reinforcement Learning
Pub Date : 2024-09-26 DOI: 10.1109/TMLCN.2024.3469131
Ramy Mohamed;Marios Avgeris;Aris Leivadeas;Ioannis Lambadaris
In the 6G wireless era, the strategic deployment of Virtual Network Functions (VNFs) within a network infrastructure that optimizes resource utilization while fulfilling performance criteria is critical for successfully implementing the Network Function Virtualization (NFV) paradigm across the Edge-to-Cloud continuum. This is especially prominent when resource fragmentation (where available resources become isolated and underutilized) becomes an issue due to the frequent reallocations of VNFs. However, traditional optimization methods often struggle to deal with the dynamic and complex nature of the VNF placement problem when fragmentation is considered. This study proposes a novel online VNF placement approach for Edge/Cloud infrastructures that utilizes Deep Reinforcement Learning (DRL) and Reward Constrained Policy Optimization (RCPO) to address this problem. We combine DRL’s adaptability with RCPO’s constraint incorporation capabilities to ensure that the learned policies satisfy the performance and resource constraints while minimizing resource fragmentation. Specifically, the VNF placement problem is first formulated as an offline-constrained optimization problem, and then we devise an online solver using Neural Combinatorial Optimization (NCO). Our method incorporates a metric called Resource Fragmentation Degree (RFD) to quantify fragmentation in the network. Using this metric and RCPO, our NCO agent is trained to make intelligent placement decisions that reduce fragmentation and optimize resource utilization. An error correction heuristic complements the robustness of the proposed framework. Through extensive testing in a simulated environment, the proposed approach is shown to outperform state-of-the-art VNF placement techniques when it comes to minimizing resource fragmentation under constraint satisfaction guarantees.
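A fragmentation metric of this kind can be made concrete with a toy definition. The formula below is an assumption in the spirit of the paper's Resource Fragmentation Degree (RFD), not its actual definition: one minus the largest contiguous free capacity over the total free capacity, so 0 means all free resources form one usable block and values near 1 mean free capacity is scattered across small islands.

```python
# Toy fragmentation score over per-node free capacities (illustrative only).

def rfd(free_per_node):
    total = sum(free_per_node)
    if total == 0:
        return 0.0  # nothing free, nothing fragmented
    return 1.0 - max(free_per_node) / total

frag_low = rfd([8, 0, 0])      # one big free block -> no fragmentation
frag_high = rfd([2, 2, 2, 2])  # same capacity scattered -> high fragmentation
```

A placement agent minimizing such a score is rewarded for packing VNFs so that the remaining capacity stays in large, reusable blocks.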
Citations: 0
Removing the Need for Ground Truth UWB Data Collection: Self-Supervised Ranging Error Correction Using Deep Reinforcement Learning
Pub Date : 2024-09-26 DOI: 10.1109/TMLCN.2024.3469128
Dieter Coppens;Ben van Herbruggen;Adnan Shahid;Eli de Poorter
Indoor positioning using UWB technology has gained interest due to its centimeter-level accuracy potential. However, multipath effects and non-line-of-sight conditions cause ranging errors between anchors and tags. Existing approaches for mitigating these ranging errors rely on collecting large labeled datasets, making them impractical for real-world deployments. This paper proposes a novel self-supervised deep reinforcement learning approach that does not require labeled ground truth data. A reinforcement learning agent uses the channel impulse response as a state and predicts corrections to minimize the error between corrected and estimated ranges. The agent learns, self-supervised, by iteratively improving corrections that are generated by combining the predictability of trajectories with filtering and smoothing. Experiments on real-world UWB measurements demonstrate comparable performance to state-of-the-art supervised methods, overcoming data dependency and lack of generalizability limitations. This makes self-supervised deep reinforcement learning a promising solution for practical and scalable UWB-ranging error correction.
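The key trick, generating training targets without ground truth, can be illustrated with a heavily simplified stand-in: a median filter over the ranging track supplies a pseudo-"corrected" trajectory, and the difference to the raw range is the error signal the agent would learn to predict from the channel impulse response. The filter choice and numbers are assumptions, not the paper's pipeline.

```python
# Self-supervised pseudo-labels for ranging error correction (toy version):
# a 3-point median filter plays the role of trajectory-based smoothing.

def median3(xs):
    out = []
    for i in range(len(xs)):
        window = sorted(xs[max(0, i - 1):i + 2])
        out.append(window[len(window) // 2])
    return out

raw = [5.0, 5.1, 9.0, 5.2, 5.1]          # one NLOS spike at index 2
pseudo = median3(raw)                     # filtering-based pseudo-ground-truth
errors = [r - p for r, p in zip(raw, pseudo)]  # targets for the learner
```

The spike stands out as the dominant error, which is exactly what a correction model must learn to remove, and no labeled distance was needed to produce that target.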
Citations: 0
Decentralized Grant-Free mMTC Traffic Multiplexing With eMBB Data Through Deep Reinforcement Learning
Pub Date : 2024-09-24 DOI: 10.1109/TMLCN.2024.3467044
Giovanni Di Gennaro;Amedeo Buonanno;Gianmarco Romano;Stefano Buzzi;Francesco A. N. Palmieri
This paper addresses the problem of joint multiplexing of enhanced Mobile Broadband (eMBB) and massive Machine-Type Communications (mMTC) traffic in the same uplink time-frequency resource grid (RG). Given the challenge posed by a potentially large number of users, it is essential to focus on a multiple access strategy that leverages artificial intelligence to adapt to specific channel conditions. An mMTC agent is developed through a Deep Reinforcement Learning (DRL) methodology for generating grant-free frequency hopping traffic in a decentralized manner, assuming the presence of underlying eMBB traffic dynamics. Within this DRL framework, a methodical comparison between two possible deep neural networks is conducted, using different generative models to ascertain their intrinsic capabilities in various application scenarios. The analysis conducted reveals that the Long Short-Term Memory network is particularly suitable for the required task, demonstrating a robustness that is consistently very close to potential upper-bounds, despite the latter requiring complete knowledge of the underlying statistics.
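The per-slot decision the mMTC agent faces reduces to the following sketch (an illustrative assumption: in the paper the occupancy estimate comes from a recurrent network such as the LSTM, not from a hand-made list): given a predicted probability that each frequency channel carries eMBB traffic, hop to the channel least likely to collide.

```python
# Grant-free frequency-hop selection against predicted eMBB occupancy.

def pick_hop(occupancy_prob):
    # transmit on the channel with the lowest predicted eMBB activity
    return min(range(len(occupancy_prob)), key=lambda ch: occupancy_prob[ch])

pred = [0.9, 0.2, 0.6, 0.05]  # hypothetical per-channel eMBB activity forecast
chosen = pick_hop(pred)
```

The quality of the forecast, which is what distinguishes the compared network architectures, directly determines the collision rate of this rule.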
Citations: 0
Biased Backpressure Routing Using Link Features and Graph Neural Networks
Pub Date : 2024-09-16 DOI: 10.1109/TMLCN.2024.3461711
Zhongyuan Zhao;Bojan Radojičić;Gunjan Verma;Ananthram Swami;Santiago Segarra
To reduce the latency of Backpressure (BP) routing in wireless multi-hop networks, we propose to enhance the existing shortest path-biased BP (SP-BP) and sojourn time-based backlog metrics, since they introduce no additional time step-wise signaling overhead to the basic BP. Rather than relying on hop-distance, we introduce a new edge-weighted shortest path bias built on the scheduling duty cycle of wireless links, which can be predicted by a graph convolutional neural network based on the topology and traffic of wireless networks. Additionally, we tackle three long-standing challenges associated with SP-BP: optimal bias scaling, efficient bias maintenance, and integration of delay awareness. Our proposed solutions inherit the throughput optimality of the basic BP, as well as its practical advantages of low complexity and fully distributed implementation. Our approaches rely on common link features and introduce only a one-time constant overhead over previous SP-BP schemes, or a one-time overhead linear in the network size over the basic BP. Numerical experiments show that our solutions can effectively address the major drawbacks of slow startup, random walk, and the last packet problem in basic BP, improving the end-to-end delay of existing low-overhead BP algorithms under various settings of network traffic, interference, and mobility.
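The shortest-path-biased BP mechanism can be sketched end to end on a toy topology. Here the link weights stand in for the predicted scheduling duty cycles (in the paper estimated by a graph convolutional network); Dijkstra turns them into per-node biases toward the destination, and each node forwards to the neighbor maximizing queue differential plus bias differential. The four-node graph and queue lengths are illustrative.

```python
import heapq

def dijkstra(adj, dst):
    # shortest edge-weighted distance from every node to the destination
    dist = {n: float("inf") for n in adj}
    dist[dst] = 0.0
    heap = [(0.0, dst)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u].items():
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

def next_hop(node, queues, bias, adj):
    # BP weight: backlog differential plus shortest-path bias differential
    return max(adj[node],
               key=lambda v: (queues[node] - queues[v]) + (bias[node] - bias[v]))

# link weights ~ predicted duty cycles (lower = cheaper to schedule)
adj = {"a": {"b": 1.0, "c": 3.0}, "b": {"a": 1.0, "d": 1.0},
       "c": {"a": 3.0, "d": 2.0}, "d": {"b": 1.0, "c": 2.0}}
bias = dijkstra(adj, "d")
queues = {"a": 5, "b": 2, "c": 2, "d": 0}
hop = next_hop("a", queues, bias, adj)  # bias breaks the queue-length tie
```

Without the bias, nodes "b" and "c" look identical to "a" (equal backlogs), which is the random-walk behavior of basic BP that the bias removes.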
Citations: 0
Anticipating Optical Availability in Hybrid RF/FSO Links Using RF Beacons and Deep Learning
Pub Date : 2024-09-10 DOI: 10.1109/TMLCN.2024.3457490
Mostafa Ibrahim;Arsalan Ahmad;Sabit Ekin;Peter LoPresti;Serhat Altunc;Obadiah Kegege;John F. O'Hara
Radiofrequency (RF) communications offer reliable but low-data-rate, energy-inefficient satellite links, while free-space optical (FSO) links promise high bandwidth but struggle with disturbances imposed by atmospheric effects. A hybrid RF/FSO architecture aims to achieve optimal reliability along with high data rates for space communications. Accurate prediction of dynamic ground-to-satellite FSO link availability is critical for routing decisions in low Earth orbit constellations. In this paper, we propose a system leveraging ubiquitous RF links to proactively forecast FSO link degradation before the signal drops below threshold levels. This enables pre-calculation of rerouting to maximally maintain high data rate FSO links throughout the duration of weather effects. We implement a supervised learning model to anticipate FSO attenuation based on the analysis of RF patterns. Through the simulation of a dense low Earth orbit (LEO) satellite constellation, we demonstrate the efficacy of our approach in a simulated satellite network, highlighting the balance between predictive accuracy and prediction duration. An emulated cloud attenuation model is proposed to provide insight into the temporal profiles of RF signals and their correlation to FSO channel dynamics. Our investigation sheds light on the trade-offs between prediction horizon and accuracy arising from RF beacon numbers and proximity.
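The supervised idea can be reduced to a toy model (all of it assumed for illustration: cloud attenuation affects the RF beacon mildly and the FSO link strongly and with a delay, and a plain linear fit stands in for the deep model): learn the mapping from the current RF fade to the FSO fade a few steps ahead.

```python
# Fit "FSO fade `lag` steps ahead" from "RF fade now" with least squares.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

lag = 2
cloud = [0.0, 0.1, 0.5, 1.0, 0.8, 0.3, 0.0, 0.2]  # synthetic cloud density
rf = [0.5 * c for c in cloud]                      # mild RF fade (dB)
fso = [0.0] * lag + [10.0 * c for c in cloud[:-lag]]  # deep, delayed FSO fade

slope, intercept = fit_line(rf[:-lag], fso[lag:])  # train on aligned pairs
predicted = slope * rf[-1] + intercept             # forecast `lag` steps ahead
```

On this constructed data the relation is exact, so the fit recovers it; the paper's contribution is showing that a learned model extracts a usable version of this correlation from realistic RF/FSO dynamics.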
Citations: 0
RSMA-Enabled Interference Management for Industrial Internet of Things Networks With Finite Blocklength Coding and Hardware Impairments
Pub Date : 2024-09-05 DOI: 10.1109/TMLCN.2024.3455268
Nahed Belhadj Mohamed;Md. Zoheb Hassan;Georges Kaddoum
The increasing proliferation of industrial internet of things (IIoT) devices requires the development of efficient radio resource allocation techniques to optimize spectrum utilization. In densely populated IIoT networks, the interference that results from simultaneously scheduling multiple IIoT devices over the same radio resource blocks (RRBs) severely degrades a network’s achievable capacity. This paper investigates an interference management problem for IIoT networks that considers both finite blocklength (FBL)-coded transmission and signal distortions induced by hardware impairments (HWIs) arising from practical, low-complexity radio-frequency front ends. We use the rate-splitting multiple access (RSMA) scheme to effectively schedule multiple IIoT devices in a cluster over the same RRB(s). To enhance the system’s achievable capacity, a joint clustering and transmit power allocation (PA) problem is formulated. To tackle the optimization problem’s inherent computational intractability due to its non-convex structure, a two-step distributed clustering and power management (DCPM) framework is proposed. First, the DCPM framework obtains a set of clustered devices for each access point by employing a greedy clustering algorithm while maximizing the clustered devices’ signal-to-interference-plus-noise ratio. Then, the DCPM framework employs a multi-agent deep reinforcement learning (DRL) framework to optimize transmit PA among the clustered devices. The proposed DRL algorithm learns a suitable transmit PA policy that does not require precise information about instantaneous signal distortions. Our simulation results demonstrate that our proposed DCPM framework adapts seamlessly to varying channel conditions and outperforms several benchmark schemes with and without HWI-induced signal distortions.
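The first DCPM stage, greedy SINR-aware clustering, can be sketched as follows (an illustrative stand-in, with unit transmit powers, a fixed noise level, and a simple interference model all assumed): devices are considered in decreasing channel-gain order and kept in the cluster sharing one resource block only while every clustered device still meets the SINR threshold under mutual interference.

```python
# Greedy clustering of devices onto one shared resource block.

def sinr(gain, interferers, noise=0.1):
    # signal-to-interference-plus-noise ratio under unit transmit powers
    return gain / (noise + sum(interferers))

def greedy_cluster(gains, threshold):
    cluster = []
    for g in sorted(gains, reverse=True):      # strongest devices first
        trial = cluster + [g]
        feasible = all(
            sinr(trial[i], trial[:i] + trial[i + 1:]) >= threshold
            for i in range(len(trial))         # every member must still qualify
        )
        if feasible:
            cluster = trial
    return cluster

chosen = greedy_cluster([5.0, 4.0, 0.5, 3.0], threshold=0.5)
```

The second stage, the multi-agent DRL power allocation, would then tune the per-device powers that this sketch holds fixed.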
随着工业物联网(IIoT)设备的日益增多,需要开发高效的无线电资源分配技术来优化频谱利用率。在人口稠密的物联网网络中,将多个物联网设备同时调度到相同的无线电资源块(RRB)上所产生的干扰会严重降低网络的可实现容量。本文研究了 IIoT 网络的干扰管理问题,该问题既考虑了有限块长(FBL)编码传输,也考虑了由实用的低复杂度射频前端产生的硬件损伤(HWIs)引起的信号失真。我们使用速率分割多路访问(RSMA)方案,通过相同的 RRB 有效调度集群中的多个物联网设备。为了提高系统的可实现容量,我们提出了一个联合聚类和发射功率分配(PA)问题。为了解决优化问题因其非凸性结构而造成的固有计算难点,我们提出了一个分两步进行的分布式聚类和功率管理(DCPM)框架。首先,DCPM 框架采用贪婪聚类算法为每个接入点获取一组聚类设备,同时最大化聚类设备的信号干扰加噪声比。然后,DCPM 框架采用多代理深度强化学习(DRL)框架来优化聚类设备之间的发送功率放大器。所提出的 DRL 算法可学习合适的发送功率策略,而无需精确的瞬时信号失真信息。我们的仿真结果表明,我们提出的 DCPM 框架能无缝适应不同的信道条件,在有 HWI 引起的信号失真和没有 HWI 引起的信号失真的情况下,其性能都优于几种基准方案。
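The greedy clustering step described in the abstract can be sketched as follows. This is a minimal illustration only, assuming a simple linear-gain channel model and a fixed per-cluster size cap (both assumptions of this sketch, not details from the paper); the actual DCPM framework operates per RRB with FBL coding and hardware impairments.

```python
import math

def sinr_db(signal, interference, noise=1e-9):
    """SINR in dB for linear signal and aggregate interference powers."""
    return 10 * math.log10(signal / (interference + noise))

def greedy_cluster(gains, max_size):
    """Greedily assign each device to the access point whose cluster
    currently offers it the highest SINR, under a per-cluster size cap.

    gains[d][a] is the (hypothetical) linear channel gain from device d
    to access point a; devices already clustered at an AP act as
    interference for newcomers sharing that AP's radio resources.
    """
    n_aps = len(gains[0])
    clusters = {a: [] for a in range(n_aps)}
    for d, g in enumerate(gains):
        best_ap, best_sinr = None, -math.inf
        for a in range(n_aps):
            if len(clusters[a]) >= max_size:
                continue  # cluster is full, skip this AP
            interference = sum(gains[other][a] for other in clusters[a])
            s = sinr_db(g[a], interference)
            if s > best_sinr:
                best_ap, best_sinr = a, s
        if best_ap is not None:
            clusters[best_ap].append(d)
    return clusters
```

For instance, two devices with strong gains to different APs land in different clusters, and a third device is placed wherever its residual SINR after interference is highest.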
{"title":"RSMA-Enabled Interference Management for Industrial Internet of Things Networks With Finite Blocklength Coding and Hardware Impairments","authors":"Nahed Belhadj Mohamed;Md. Zoheb Hassan;Georges Kaddoum","doi":"10.1109/TMLCN.2024.3455268","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3455268","url":null,"abstract":"The increasing proliferation of industrial internet of things (IIoT) devices requires the development of efficient radio resource allocation techniques to optimize spectrum utilization. In densely populated IIoT networks, the interference that results from simultaneously scheduling multiple IIoT devices over the same radio resource blocks (RRBs) severely degrades a network’s achievable capacity. This paper investigates an interference management problem for IIoT networks that considers both finite blocklength (FBL)-coded transmission and signal distortions induced by hardware impairments (HWIs) arising from practical, low-complexity radio-frequency front ends. We use the rate-splitting multiple access (RSMA) scheme to effectively schedule multiple IIoT devices in a cluster over the same RRB(s). To enhance the system’s achievable capacity, a joint clustering and transmit power allocation (PA) problem is formulated. To tackle the optimization problem’s inherent computational intractability due to its non-convex structure, a two-step distributed clustering and power management (DCPM) framework is proposed. First, the DCPM framework obtains a set of clustered devices for each access point by employing a greedy clustering algorithm while maximizing the clustered devices’ signal-to-interference-plus-noise ratio. Then, the DCPM framework employs a multi-agent deep reinforcement learning (DRL) framework to optimize transmit PA among the clustered devices. The proposed DRL algorithm learns a suitable transmit PA policy that does not require precise information about instantaneous signal distortions. 
Our simulation results demonstrate that our proposed DCPM framework adapts seamlessly to varying channel conditions and outperforms several benchmark schemes with and without HWI-induced signal distortions.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"1319-1340"},"PeriodicalIF":0.0,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10666756","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142246544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Channel Path Loss Prediction Using Satellite Images: A Deep Learning Approach 利用卫星图像预测信道路径损耗:深度学习方法
Pub Date : 2024-09-03 DOI: 10.1109/TMLCN.2024.3454019
Chenlong Wang;Bo Ai;Ruisi He;Mi Yang;Shun Zhou;Long Yu;Yuxin Zhang;Zhicheng Qiu;Zhangdui Zhong;Jianhua Fan
With the advancement of communication technology, there is a growing demand for high-precision, highly generalizable channel path loss models, as they are fundamental to communication systems. Traditional stochastic and deterministic models struggle to strike a balance between prediction accuracy and generalizability. This paper proposes a novel deep learning-based path loss prediction model that uses satellite images. To efficiently extract environment features from the satellite images, a residual structure, an attention mechanism, and a spatial pyramid pooling layer are incorporated into the network based on expert knowledge. The interpretability of the proposed model is improved using a convolutional network activation visualization method. Finally, the proposed model achieves a prediction root mean square error of 5.05 dB, an improvement of 3.07 dB over a reference empirical propagation model.
随着通信技术的发展,人们对高精度和高泛化的信道路径损耗模型提出了更高的要求,因为它是通信系统的基础。对于传统的随机和确定性模型,很难在预测精度和泛化能力之间取得平衡。本文利用卫星图像提出了一种基于深度学习的新型路径损耗预测模型。为了有效地从卫星图像中提取环境特征,基于专家知识在网络中开发了残差结构、注意机制和空间金字塔池化层。利用卷积网络激活可视化方法,提高了所提模型的可解释性。最后,提出的模型达到了预测精度,均方根误差为 5.05 dB,比参考经验传播模型提高了 3.07 dB。
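To make the reported figures concrete, the sketch below shows how a root-mean-square error in dB is computed between predicted and measured path loss, alongside a classic log-distance model of the kind commonly used as an empirical baseline. The parameters (`pl0_db = 40.0`, path loss exponent 3.0) are illustrative assumptions of this sketch, not the paper's reference propagation model.

```python
import math

def rmse_db(predicted, measured):
    """Root mean square error (in dB) between predicted and measured path loss."""
    n = len(measured)
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured)) / n)

def log_distance_pl(d_m, pl0_db=40.0, exponent=3.0, d0_m=1.0):
    """Classic log-distance path loss model: PL(d) = PL0 + 10 * n * log10(d / d0)."""
    return pl0_db + 10 * exponent * math.log10(d_m / d0_m)
```

A learned model would be evaluated by feeding it satellite-image features and comparing its `rmse_db` against that of an empirical baseline like `log_distance_pl`.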
{"title":"Channel Path Loss Prediction Using Satellite Images: A Deep Learning Approach","authors":"Chenlong Wang;Bo Ai;Ruisi He;Mi Yang;Shun Zhou;Long Yu;Yuxin Zhang;Zhicheng Qiu;Zhangdui Zhong;Jianhua Fan","doi":"10.1109/TMLCN.2024.3454019","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3454019","url":null,"abstract":"With the advancement of communication technology, there is a higher demand for high-precision and high-generalization channel path loss models as it is fundamental to communication systems. For traditional stochastic and deterministic models, it is difficult to strike a balance between prediction accuracy and generalizability. This paper proposes a novel deep learning-based path loss prediction model using satellite images. In order to efficiently extract environment features from satellite images, residual structure, attention mechanism, and spatial pyramid pooling layer are developed in the network based on expert knowledge. Using a convolutional network activation visualization method, the interpretability of the proposed model is improved. Finally, the proposed model achieves a prediction accuracy with a root mean square error of 5.05 dB, demonstrating an improvement of 3.07 dB over a reference empirical propagation model.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"1357-1368"},"PeriodicalIF":0.0,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10663692","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142246437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Energy Minimization for Federated Learning Based Radio Map Construction 基于联合学习的无线电地图构建的能量最小化
Pub Date : 2024-09-02 DOI: 10.1109/TMLCN.2024.3453212
Fahui Wu;Yunfei Gao;Lin Xiao;Dingcheng Yang;Jiangbin Lyu
This paper studies an unmanned aerial vehicle (UAV)-enabled communication network in which the UAV acts as an air relay serving multiple ground users (GUs), which jointly construct an accurate radio map, or channel knowledge map (CKM), through a federated learning (FL) algorithm. A radio map, or CKM, is a site-specific database that contains detailed channel-related information for specific locations, including channel power gains, shadowing, interference, and angles of arrival (AoA) and departure (AoD), all of which are crucial for enabling environment-aware wireless communications. Because the wireless network has a limited number of resource blocks (RBs), only a subset of users can be selected to transmit model parameters at each iteration. Since the FL training process requires multiple transmissions of model parameters, the energy limitations of the wireless devices seriously affect the quality of the FL result; energy consumption and resource allocation therefore have a significant impact on the final training outcome. We formulate an optimization problem that jointly considers user selection, wireless resource allocation, and UAV deployment, with the goal of minimizing the computation energy and wireless transmission energy. To solve the problem, we first propose a probabilistic user selection mechanism to reduce the total number of FL iterations, whereby users who have a larger impact on the global model in each iteration are more likely to be selected. A convex optimization technique is then used to optimize the bandwidth allocation. Furthermore, to further reduce transmission energy, we use deep reinforcement learning (DRL) to optimize the deployment location of the UAV. The DRL-based method enables the UAV to learn from its interaction with the environment and identify the most energy-efficient deployment locations by evaluating energy consumption during training. Finally, simulation results show that the proposed algorithm reduces total energy consumption by nearly 38% compared to the standard FL algorithm.
本文研究了一种支持无人机(UAV)的通信网络,其中无人机充当空中中继器,为多个地面用户(GU)提供服务,通过联合学习(FL)算法共同构建精确的无线电地图或信道知识地图(CKM)。无线电地图或信道知识地图是一个特定地点的数据库,其中包含特定地点的详细信道相关信息。这些信息包括信道功率增益、阴影、干扰、到达角(AoA)和离开角(AoD),所有这些对于实现环境感知无线通信都至关重要。由于无线通信网络的资源块(RB)有限,因此每次迭代只能选择一个用户子集来传输模型参数。由于 FL 训练过程需要多次传输模型参数,无线设备的能量限制将严重影响 FL 结果的质量。从这个意义上说,能量消耗和资源分配对最终的 FL 训练结果具有重要意义。我们将用户选择、无线资源分配和无人机部署联合考虑,提出了一个优化问题,目标是使计算能量和无线传输能量最小。为了解决这个问题,我们首先提出了一种概率用户选择机制来减少 FL 的总迭代次数,即在每次迭代中对全局模型影响较大的用户更有可能被选中。然后利用凸优化技术优化带宽分配。此外,为了进一步节省通信传输能量,我们使用深度强化学习(DRL)来优化无人机的部署位置。基于 DRL 的方法使无人机能够从与环境的交互中学习,并通过评估训练过程中的能耗来确定最节能的部署位置。最后,仿真结果表明,与标准 FL 算法相比,我们提出的算法可将总能耗降低近 38%。
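The probabilistic user selection idea above can be sketched as weighted sampling without replacement: a user's chance of being picked grows with its impact score on the global model, and the number of selected users is capped by the available RBs. The impact scores and the RB cap here are placeholders for illustration; the paper derives its actual selection probabilities within the joint optimization.

```python
import random

def select_users(impacts, n_rbs, rng=random):
    """Sample a subset of users (without replacement) for one FL round.

    impacts[u] is a (hypothetical) non-negative score measuring user u's
    impact on the global model; users with larger scores are
    proportionally more likely to be chosen. At most n_rbs users fit.
    """
    candidates = list(range(len(impacts)))
    weights = list(impacts)
    chosen = []
    while candidates and len(chosen) < n_rbs:
        total = sum(weights)
        r = rng.uniform(0, total)
        acc = 0.0
        for i, (u, w) in enumerate(zip(candidates, weights)):
            acc += w
            if r <= acc:
                chosen.append(u)
                # remove the picked user so it cannot be selected twice
                del candidates[i], weights[i]
                break
    return chosen
```

Each FL round would then schedule only the returned users on the available RBs, cutting the transmission energy spent per iteration.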
{"title":"Energy Minimization for Federated Learning Based Radio Map Construction","authors":"Fahui Wu;Yunfei Gao;Lin Xiao;Dingcheng Yang;Jiangbin Lyu","doi":"10.1109/TMLCN.2024.3453212","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3453212","url":null,"abstract":"This paper studies an unmanned aerial vehicle (UAV)-enabled communication network, in which the UAV acts as an air relay serving multiple ground users (GUs) to jointly construct an accurate radio map or channel knowledge maps (CKM) through a federated learning (FL) algorithm. Radio map or CKM is a site-specific database that contains detailed channel-related information for specific locations. This information includes channel power gains, shadowing, interference, and angles of arrival (AoA) and departure (AoD), all of which are crucial for enabling environment-aware wireless communications. Because the wireless communication network has limited resource blocks (RBs), only a subset of users can be selected to transmit the model parameters at each iteration. Since the FL training process requires multiple transmission model parameters, the energy limitation of the wireless device will seriously affect the quality of the FL result. In this sense, the energy consumption and resource allocation have a significance to the final FL training result. We formulate an optimization problem by jointly considering user selection, wireless resource allocation, and UAV deployment, with the goal of minimizing the computation energy and wireless transmission energy. To solve the problem, we first propose a probabilistic user selection mechanism to reduce the total number of FL iterations, whereby the users who have a larger impact on the global model in each iteration are more likely to be selected. Then the convex optimization technique is utilized to optimize bandwidth allocation. 
Furthermore, to further save communication transmission energy, we use deep reinforcement learning (DRL) to optimize the deployment location of the UAV. The DRL-based method enables the UAV to learn from its interaction with the environment and ascertain the most energy-efficient deployment locations through an evaluation of energy consumption during the training process. Finally, the simulation results show that our proposed algorithm can reduce the total energy consumption by nearly 38%, compared to the standard FL algorithm.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"1248-1264"},"PeriodicalIF":0.0,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10662910","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142169609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0