
Latest publications: IEEE Transactions on Machine Learning in Communications and Networking

Removing the Need for Ground Truth UWB Data Collection: Self-Supervised Ranging Error Correction Using Deep Reinforcement Learning
Pub Date : 2024-09-26 DOI: 10.1109/TMLCN.2024.3469128
Dieter Coppens;Ben van Herbruggen;Adnan Shahid;Eli de Poorter
Indoor positioning using UWB technology has gained interest due to its centimeter-level accuracy potential. However, multipath effects and non-line-of-sight conditions cause ranging errors between anchors and tags. Existing approaches for mitigating these ranging errors rely on collecting large labeled datasets, making them impractical for real-world deployments. This paper proposes a novel self-supervised deep reinforcement learning approach that does not require labeled ground truth data. A reinforcement learning agent uses the channel impulse response as its state and predicts corrections to minimize the error between corrected and estimated ranges. The agent learns in a self-supervised manner, iteratively improving corrections that are generated by combining the predictability of trajectories with filtering and smoothing. Experiments on real-world UWB measurements demonstrate performance comparable to state-of-the-art supervised methods, overcoming their data-dependency and generalizability limitations. This makes self-supervised deep reinforcement learning a promising solution for practical and scalable UWB ranging error correction.
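The core self-supervised idea (smooth the measured ranges to obtain pseudo-targets, then train a corrector on channel features without any ground truth) can be sketched in a few lines. This is a toy stand-in, not the paper's DRL agent: the synthetic trajectory, the stand-in "CIR" features, and the linear corrector are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a tag moves along a smooth trajectory, so its true
# range to an anchor varies slowly; measurements carry a slowly drifting
# NLOS bias plus noise.
t = np.linspace(0.0, 2.0 * np.pi, 400)
true_range = 5.0 + np.sin(t)
nlos_bias = 0.05 * np.cumsum(rng.standard_normal(400))
measured = true_range + nlos_bias + 0.02 * rng.standard_normal(400)

# Self-supervised pseudo-targets: exploit trajectory predictability by
# filtering/smoothing the raw ranges (stand-in for the paper's pipeline).
kernel = np.ones(15) / 15
pseudo = np.convolve(measured, kernel, mode="same")

# Toy stand-in for CIR features, assumed correlated with the ranging bias.
features = np.stack([nlos_bias + 0.01 * rng.standard_normal(400),
                     np.ones(400)], axis=1)

# Tiny linear "agent" trained by gradient descent to predict a correction
# that minimizes (measured + correction - pseudo)^2; no ground truth used.
w = np.zeros(2)
for _ in range(1000):
    corr = features @ w
    grad = 2.0 * features.T @ (measured + corr - pseudo) / len(t)
    w -= 0.05 * grad

loss_before = np.mean((measured - pseudo) ** 2)
loss_after = np.mean((measured + features @ w - pseudo) ** 2)
print(loss_before, loss_after)
```

The point is the training signal: the loss is built entirely from measured data and its smoothed version, never from true ranges.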
IEEE Transactions on Machine Learning in Communications and Networking, vol. 2, pp. 1615-1627. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10695458
Citations: 0
Decentralized Grant-Free mMTC Traffic Multiplexing With eMBB Data Through Deep Reinforcement Learning
Pub Date : 2024-09-24 DOI: 10.1109/TMLCN.2024.3467044
Giovanni Di Gennaro;Amedeo Buonanno;Gianmarco Romano;Stefano Buzzi;Francesco A. N. Palmieri
This paper addresses the problem of joint multiplexing of enhanced Mobile Broadband (eMBB) and massive Machine-Type Communications (mMTC) traffic in the same uplink time-frequency resource grid (RG). Given the challenge posed by a potentially large number of users, it is essential to focus on a multiple access strategy that leverages artificial intelligence to adapt to specific channel conditions. An mMTC agent is developed through a Deep Reinforcement Learning (DRL) methodology for generating grant-free frequency hopping traffic in a decentralized manner, assuming the presence of underlying eMBB traffic dynamics. Within this DRL framework, a methodical comparison between two possible deep neural networks is conducted, with different generative models used to ascertain their intrinsic capabilities in various application scenarios. The analysis reveals that the Long Short-Term Memory network is particularly suitable for the required task, demonstrating robustness consistently very close to potential upper bounds, even though the latter require complete knowledge of the underlying statistics.
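The decision problem faced by the mMTC agent can be illustrated with tabular Q-learning as a stand-in for the paper's deep agent: a hypothetical device learns a grant-free hopping pattern that avoids a periodic eMBB occupancy. The channel count, period, and reward values are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

K, P = 4, 6                    # K candidate channels; eMBB pattern repeats every P slots
embb_busy = rng.integers(0, K, size=P)   # channel occupied by eMBB in each slot

# Tabular Q-learning: state = slot index within the period, action = channel.
Q = np.zeros((P, K))
eps, alpha, gamma = 0.1, 0.2, 0.9
for step in range(20000):
    s = step % P
    if rng.random() < eps:
        a = int(rng.integers(0, K))      # explore
    else:
        a = int(Q[s].argmax())           # exploit
    r = -1.0 if a == embb_busy[s] else 1.0   # collision penalty vs. success
    s2 = (step + 1) % P
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])

policy = Q.argmax(axis=1)
collisions = int(sum(policy[s] == embb_busy[s] for s in range(P)))
print("collisions per period:", collisions)
```

In the paper the eMBB dynamics are not a fixed periodic pattern and the policy is a recurrent (LSTM) network; the sketch only shows why learning from collision feedback alone suffices for decentralized hopping.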
IEEE Transactions on Machine Learning in Communications and Networking, vol. 2, pp. 1440-1455. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10689612
Citations: 0
Biased Backpressure Routing Using Link Features and Graph Neural Networks
Pub Date : 2024-09-16 DOI: 10.1109/TMLCN.2024.3461711
Zhongyuan Zhao;Bojan Radojičić;Gunjan Verma;Ananthram Swami;Santiago Segarra
To reduce the latency of Backpressure (BP) routing in wireless multi-hop networks, we propose to enhance the existing shortest path-biased BP (SP-BP) and sojourn time-based backlog metrics, since they introduce no additional time-step-wise signaling overhead to the basic BP. Rather than relying on hop distance, we introduce a new edge-weighted shortest path bias built on the scheduling duty cycle of wireless links, which can be predicted by a graph convolutional neural network based on the topology and traffic of wireless networks. Additionally, we tackle three long-standing challenges associated with SP-BP: optimal bias scaling, efficient bias maintenance, and integration of delay awareness. Our proposed solutions inherit the throughput optimality of the basic BP, as well as its practical advantages of low complexity and fully distributed implementation. Our approaches rely on common link features and introduce only a one-time constant overhead relative to previous SP-BP schemes, or a one-time overhead linear in the network size relative to the basic BP. Numerical experiments show that our solutions can effectively address the major drawbacks of slow startup, random walk, and the last-packet problem in basic BP, improving the end-to-end delay of existing low-overhead BP algorithms under various settings of network traffic, interference, and mobility.
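The edge-weighted bias can be illustrated without the GCN predictor: assume each link's duty cycle is already known, set the edge weight to its reciprocal (a proxy for expected transmissions), run Dijkstra toward the destination, and forward to the neighbor maximizing the bias-augmented backlog differential. The three-node topology, duty cycles, and queue lengths below are invented for illustration, and the single-commodity forwarding rule is a simplification of SP-BP.

```python
import heapq

def dijkstra(graph, dest):
    # graph: {node: [(neighbor, edge_weight), ...]}; returns bias(i) =
    # shortest-path cost from each node i to the destination.
    dist = {dest: 0.0}
    pq = [(0.0, dest)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Edge weight: expected transmissions on a link, taken as 1 / duty cycle.
duty = {("A", "B"): 0.9, ("B", "C"): 0.8, ("A", "C"): 0.2}
graph = {}
for (u, v), dc in duty.items():
    graph.setdefault(u, []).append((v, 1.0 / dc))
    graph.setdefault(v, []).append((u, 1.0 / dc))

bias = dijkstra(graph, "C")

# SP-biased BP forwarding: pick the neighbor maximizing the bias-augmented
# backlog differential (single commodity, illustration only).
queue = {"A": 5.0, "B": 4.0, "C": 0.0}

def next_hop(u):
    best = max(graph[u],
               key=lambda e: (queue[u] + bias[u]) - (queue[e[0]] + bias[e[0]]))
    return best[0]

print(bias, next_hop("A"))
```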
IEEE Transactions on Machine Learning in Communications and Networking, vol. 2, pp. 1424-1439. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10681132
Citations: 0
Anticipating Optical Availability in Hybrid RF/FSO Links Using RF Beacons and Deep Learning
Pub Date : 2024-09-10 DOI: 10.1109/TMLCN.2024.3457490
Mostafa Ibrahim;Arsalan Ahmad;Sabit Ekin;Peter LoPresti;Serhat Altunc;Obadiah Kegege;John F. O'Hara
Radio-frequency (RF) communications offer reliable satellite links but suffer from low data rates and poor energy efficiency, while free-space optical (FSO) links promise high bandwidth but struggle with disturbances imposed by atmospheric effects. A hybrid RF/FSO architecture aims to achieve optimal reliability along with high data rates for space communications. Accurate prediction of dynamic ground-to-satellite FSO link availability is critical for routing decisions in low-Earth-orbit constellations. In this paper, we propose a system leveraging ubiquitous RF links to proactively forecast FSO link degradation before the signal drops below threshold levels. This enables pre-calculation of rerouting to maximally maintain high-data-rate FSO links throughout the duration of weather effects. We implement a supervised learning model to anticipate FSO attenuation based on the analysis of RF patterns. Through the simulation of a dense low Earth orbit (LEO) satellite constellation, we demonstrate the efficacy of our approach in a simulated satellite network, highlighting the balance between predictive accuracy and prediction duration. An emulated cloud attenuation model is proposed to provide insight into the temporal profiles of RF signals and their correlation with FSO channel dynamics. Our investigation sheds light on the trade-offs between prediction horizon and accuracy arising from RF beacon numbers and proximity.
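A minimal version of the supervised forecaster: synthesize a slowly varying cloud process that attenuates the RF beacon mildly and the FSO channel strongly, then fit a linear model mapping a window of RF history to the FSO attenuation some horizon ahead. The signal model, window size, and horizon are assumptions for illustration, not the paper's emulated cloud model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic smooth cloud process: the RF beacon sees it mildly, FSO strongly.
cloud = np.convolve(rng.random(1200), np.ones(50) / 50, mode="same")
rf_att = 0.5 * cloud + 0.01 * rng.standard_normal(1200)   # noisy RF attenuation
fso_att = 10.0 * cloud                                    # FSO attenuation

W, H = 20, 10                 # RF history window and prediction horizon (samples)
idx = np.arange(W, 1100)
X = np.stack([rf_att[i - W:i] for i in idx])  # past RF window per sample
y = fso_att[idx + H]                          # future FSO attenuation to predict
A = np.c_[X, np.ones(len(X))]                 # add a bias column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

rmse = np.sqrt(np.mean((A @ coef - y) ** 2))
baseline = np.std(y)          # RMSE of always predicting the mean
print(rmse, baseline)
```

Lengthening `H` degrades accuracy, which is the prediction-horizon trade-off the abstract refers to.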
IEEE Transactions on Machine Learning in Communications and Networking, vol. 2, pp. 1369-1388. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10672517
Citations: 0
RSMA-Enabled Interference Management for Industrial Internet of Things Networks With Finite Blocklength Coding and Hardware Impairments
Pub Date : 2024-09-05 DOI: 10.1109/TMLCN.2024.3455268
Nahed Belhadj Mohamed;Md. Zoheb Hassan;Georges Kaddoum
The increasing proliferation of industrial internet of things (IIoT) devices requires the development of efficient radio resource allocation techniques to optimize spectrum utilization. In densely populated IIoT networks, the interference that results from simultaneously scheduling multiple IIoT devices over the same radio resource blocks (RRBs) severely degrades a network’s achievable capacity. This paper investigates an interference management problem for IIoT networks that considers both finite blocklength (FBL)-coded transmission and signal distortions induced by hardware impairments (HWIs) arising from practical, low-complexity radio-frequency front ends. We use the rate-splitting multiple access (RSMA) scheme to effectively schedule multiple IIoT devices in a cluster over the same RRB(s). To enhance the system’s achievable capacity, a joint clustering and transmit power allocation (PA) problem is formulated. To tackle the optimization problem’s inherent computational intractability due to its non-convex structure, a two-step distributed clustering and power management (DCPM) framework is proposed. First, the DCPM framework obtains a set of clustered devices for each access point by employing a greedy clustering algorithm while maximizing the clustered devices’ signal-to-interference-plus-noise ratio. Then, the DCPM framework employs a multi-agent deep reinforcement learning (DRL) framework to optimize transmit PA among the clustered devices. The proposed DRL algorithm learns a suitable transmit PA policy that does not require precise information about instantaneous signal distortions. Our simulation results demonstrate that our proposed DCPM framework adapts seamlessly to varying channel conditions and outperforms several benchmark schemes with and without HWI-induced signal distortions.
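The greedy clustering step of the DCPM framework can be sketched in isolation (the rate-splitting receiver and the DRL power allocation are omitted): add devices to a cluster one at a time, always picking the candidate that leaves the highest minimum intra-cluster SINR, and stop when no candidate keeps it above a threshold. The gains, noise level, and threshold below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

g = rng.exponential(1.0, size=8)   # channel gains of 8 IIoT devices to one AP
noise = 0.1
thresh = 0.5                       # minimum acceptable SINR within a cluster

def min_sinr(cluster):
    # On a shared RRB, every clustered device sees the others as interference.
    return min(g[i] / (noise + sum(g[j] for j in cluster if j != i))
               for i in cluster)

# Greedy clustering: start from the strongest device, then keep adding the
# candidate that leaves the highest minimum SINR while it stays above thresh.
cluster = [int(np.argmax(g))]
while True:
    cands = [i for i in range(len(g)) if i not in cluster]
    if not cands:
        break
    best = max(cands, key=lambda i: min_sinr(cluster + [i]))
    if min_sinr(cluster + [best]) < thresh:
        break
    cluster.append(best)

print(cluster, min_sinr(cluster))
```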
IEEE Transactions on Machine Learning in Communications and Networking, vol. 2, pp. 1319-1340. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10666756
Citations: 0
Channel Path Loss Prediction Using Satellite Images: A Deep Learning Approach
Pub Date : 2024-09-03 DOI: 10.1109/TMLCN.2024.3454019
Chenlong Wang;Bo Ai;Ruisi He;Mi Yang;Shun Zhou;Long Yu;Yuxin Zhang;Zhicheng Qiu;Zhangdui Zhong;Jianhua Fan
With the advancement of communication technology, there is a growing demand for high-precision, highly generalizable channel path loss models, as these are fundamental to communication systems. Traditional stochastic and deterministic models struggle to strike a balance between prediction accuracy and generalizability. This paper proposes a novel deep learning-based path loss prediction model using satellite images. To efficiently extract environment features from satellite images, a residual structure, an attention mechanism, and a spatial pyramid pooling layer are incorporated into the network based on expert knowledge. Using a convolutional network activation visualization method, the interpretability of the proposed model is improved. Finally, the proposed model achieves a root mean square error of 5.05 dB, an improvement of 3.07 dB over a reference empirical propagation model.
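Of the three architectural ingredients, the spatial pyramid pooling layer is easy to show standalone: it pools a feature map on several fixed grids, so satellite crops of different resolutions map to a feature vector of the same length. A minimal NumPy sketch, with the pooling levels an assumption:

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    # Max-pool an HxW feature map over 1x1, 2x2 and 4x4 grids and concatenate,
    # yielding a fixed-length vector regardless of the input resolution.
    h, w = fmap.shape
    out = []
    for n in levels:
        rows = np.array_split(np.arange(h), n)
        cols = np.array_split(np.arange(w), n)
        for r in rows:
            for c in cols:
                out.append(fmap[np.ix_(r, c)].max())
    return np.array(out)

rng = np.random.default_rng(0)
a = spatial_pyramid_pool(rng.random((33, 47)))   # odd-sized crop
b = spatial_pyramid_pool(rng.random((64, 64)))   # different resolution
print(a.shape, b.shape)                          # both length 1 + 4 + 16 = 21
```

This fixed output length is what lets a fully connected regression head sit on top of variable-size satellite inputs.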
IEEE Transactions on Machine Learning in Communications and Networking, vol. 2, pp. 1357-1368. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10663692
Citations: 0
Energy Minimization for Federated Learning Based Radio Map Construction
Pub Date : 2024-09-02 DOI: 10.1109/TMLCN.2024.3453212
Fahui Wu;Yunfei Gao;Lin Xiao;Dingcheng Yang;Jiangbin Lyu
This paper studies an unmanned aerial vehicle (UAV)-enabled communication network, in which the UAV acts as an air relay serving multiple ground users (GUs) to jointly construct an accurate radio map, or channel knowledge map (CKM), through a federated learning (FL) algorithm. A radio map, or CKM, is a site-specific database that contains detailed channel-related information for specific locations. This information includes channel power gains, shadowing, interference, and angles of arrival (AoA) and departure (AoD), all of which are crucial for enabling environment-aware wireless communications. Because the wireless communication network has limited resource blocks (RBs), only a subset of users can be selected to transmit the model parameters at each iteration. Since the FL training process requires the model parameters to be transmitted many times, the energy limitations of the wireless devices can seriously affect the quality of the FL result. In this sense, energy consumption and resource allocation significantly influence the final FL training result. We formulate an optimization problem that jointly considers user selection, wireless resource allocation, and UAV deployment, with the goal of minimizing the computation energy and wireless transmission energy. To solve the problem, we first propose a probabilistic user selection mechanism to reduce the total number of FL iterations, whereby users who have a larger impact on the global model in each iteration are more likely to be selected. Convex optimization techniques are then utilized to optimize the bandwidth allocation. Furthermore, to further save communication transmission energy, we use deep reinforcement learning (DRL) to optimize the deployment location of the UAV. The DRL-based method enables the UAV to learn from its interaction with the environment and ascertain the most energy-efficient deployment locations through an evaluation of energy consumption during the training process.
Finally, the simulation results show that our proposed algorithm can reduce the total energy consumption by nearly 38%, compared to the standard FL algorithm.
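The probabilistic user selection mechanism can be sketched with a stand-in impact score per user (here, a hypothetical norm of each user's last local update): selection probabilities are proportional to impact, and a subset matching the available resource blocks is drawn each round. The scores, user count, and RB budget are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical impact scores, e.g. the norm of each user's last local update:
# users whose updates move the global model more get a higher selection chance.
impact = np.array([0.1, 2.0, 0.5, 3.0, 0.2, 1.2])
n_rb = 3                                  # resource blocks available per round

p = impact / impact.sum()
counts = np.zeros(len(impact), dtype=int)
for _ in range(2000):                     # simulate many FL rounds
    chosen = rng.choice(len(impact), size=n_rb, replace=False, p=p)
    counts[chosen] += 1

print(counts)   # high-impact users are scheduled far more often
```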
IEEE Transactions on Machine Learning in Communications and Networking, vol. 2, pp. 1248-1264. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10662910
Citations: 0
Node Cardinality Estimation in the Internet of Things Using Privileged Feature Distillation
Pub Date : 2024-08-29 DOI: 10.1109/TMLCN.2024.3452057
Pranav S. Page;Anand S. Siyote;Vivek S. Borkar;Gaurav S. Kasbekar
The Internet of Things (IoT) is emerging as a critical technology to connect resource-constrained devices such as sensors and actuators as well as appliances to the Internet. In this paper, a novel methodology for node cardinality estimation in wireless networks such as the IoT and Radio-Frequency Identification (RFID) systems is proposed, which uses the Privileged Feature Distillation (PFD) technique and works using a neural network with a teacher-student model. This paper is the first to use the powerful PFD technique for node cardinality estimation in wireless networks. The teacher is trained using both privileged and regular features, and the student is trained with predictions from the teacher and regular features. Node cardinality estimation algorithms based on the PFD technique are proposed for homogeneous wireless networks as well as heterogeneous wireless networks with T ≥ 2 types of nodes. Extensive simulations, using a synthetic dataset as well as a real dataset, show that the proposed PFD-based algorithms for homogeneous as well as heterogeneous networks achieve much lower mean squared errors (MSEs) in the computed node cardinality estimates than state-of-the-art protocols proposed in prior work. In particular, our simulation results for the real dataset show that our proposed PFD-based technique for homogeneous (respectively, heterogeneous) networks achieves an MSE that is 92.35% (respectively, 94.08%) lower on average than that achieved by the Simple RFID Counting (SRCs) protocol (respectively, T-SRCs protocol) proposed in prior work, while taking the same number of time slots to execute.
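A linear-regression toy of the PFD workflow (not the paper's neural architecture; the features, coefficients, and blending weight are invented): the teacher is fit on privileged plus regular features, and the student is fit on a blend of hard labels and the teacher's soft predictions using regular features only, since privileged features are unavailable at test time.

```python
import numpy as np

rng = np.random.default_rng(5)

n = 4000
regular = rng.standard_normal((n, 3))      # features available at test time
privileged = rng.standard_normal((n, 2))   # available only during training
y = (regular @ np.array([1.0, 2.0, 0.5])
     + privileged @ np.array([3.0, 1.0])
     + 0.1 * rng.standard_normal(n))       # target, e.g. a node count proxy

def fit(X, target):
    # Least-squares regressor with a bias term; returns a predict function.
    Xb = np.c_[X, np.ones(len(X))]
    w, *_ = np.linalg.lstsq(Xb, target, rcond=None)
    return lambda Z: np.c_[Z, np.ones(len(Z))] @ w

tr, te = slice(0, 3000), slice(3000, None)
both = np.c_[regular, privileged]

teacher = fit(both[tr], y[tr])             # teacher: regular + privileged
soft = teacher(both[tr])                   # teacher's soft predictions

lam = 0.5                                  # blend of hard and soft labels
student = fit(regular[tr], lam * y[tr] + (1.0 - lam) * soft)

teacher_mse = np.mean((teacher(both[te]) - y[te]) ** 2)
student_mse = np.mean((student(regular[te]) - y[te]) ** 2)
print(teacher_mse, student_mse)
```

In this linear toy the gap between the two MSEs simply shows how much signal the privileged features carry; the paper's neural student narrows that gap through distillation.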
In particular, our simulation results for the real dataset show that our proposed PFD-based technique for homogeneous (respectively, heterogeneous) networks achieves an MSE that is 92.35% (respectively, 94.08%) lower on average than that achieved by the Simple RFID Counting (SRCs) protocol (respectively, the T-SRCs protocol) proposed in prior work, while taking the same number of time slots to execute.
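To make the teacher-student pipeline concrete, here is a deliberately simplified sketch (not the paper's architecture): synthetic data, plain least-squares models instead of neural networks, and a teacher that sees only the privileged feature rather than privileged plus regular features. The student is then distilled onto the teacher's predictions using only the regular, noisy feature.

```python
import random

# Toy privileged-feature-distillation (PFD) sketch for count estimation.
# Privileged feature: a near-exact count signal, available only at training.
# Regular feature: a noisy observation, available at deployment time.
rng = random.Random(1)

def make_data(m):
    rows = []
    for _ in range(m):
        n = rng.uniform(10, 100)       # true node cardinality
        reg = n + rng.gauss(0, 8)      # noisy regular feature (deploy-time)
        priv = n + rng.gauss(0, 0.5)   # privileged feature (training only)
        rows.append((reg, priv, n))
    return rows

def fit_linear(xs, ys):
    # Closed-form simple least squares: y ~ w*x + b.
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return w, my - w * mx

train_set, test_set = make_data(400), make_data(200)

# Teacher: privileged feature -> true count.
tw, tb = fit_linear([p for _, p, _ in train_set], [n for _, _, n in train_set])
# Student: regular feature -> teacher's predictions (the distillation step).
soft_targets = [tw * p + tb for _, p, _ in train_set]
sw, sb = fit_linear([r for r, _, _ in train_set], soft_targets)

def mse(preds, truth):
    return sum((a - b) ** 2 for a, b in zip(preds, truth)) / len(truth)

teacher_mse = mse([tw * p + tb for _, p, _ in test_set], [n for *_, n in test_set])
student_mse = mse([sw * r + sb for r, _, _ in test_set], [n for *_, n in test_set])
print(round(teacher_mse, 2), round(student_mse, 2))
```

At deployment time only the regular feature is needed, which is the point of distilling away the privileged input; the student's error is bounded below by the noise in that regular feature.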
Citations: 0
Agent Selection Framework for Federated Learning in Resource-Constrained Wireless Networks
Pub Date : 2024-08-28 DOI: 10.1109/TMLCN.2024.3450829
Maria Raftopoulou;José Mairton B. da Silva;Remco Litjens;H. Vincent Poor;Piet van Mieghem
Federated learning is an effective method to train a machine learning model without aggregating the potentially sensitive data of agents at a central server. However, the limited communication bandwidth, the hardware of the agents, and a potential application-specific latency requirement impact how many and which agents can participate in the learning process at each communication round. In this paper, we propose a selection metric characterizing each agent's importance with respect to both the learning process and the resource efficiency of its wireless communication channel. Leveraging this importance metric, we formulate a general agent selection optimization problem, which can be adapted to different environments with latency or resource-oriented constraints. Considering an example wireless environment with latency constraints, the agent selection problem reduces to the 0/1 knapsack problem, which we solve with a fully polynomial-time approximation scheme. We then evaluate the agent selection policy in different scenarios, using extensive simulations for an example task of object classification of European traffic signs. The results indicate that agent selection policies that consider both learning and channel aspects provide benefits in terms of the attainable global model accuracy and/or the time needed to achieve a targeted accuracy level. However, in scenarios where agents have a limited number of data samples or where the latency requirement is very stringent, a pure learning-based agent selection policy is shown to be more beneficial during the early or late stages of the learning process.
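The latency-constrained selection step above reduces to 0/1 knapsack; below is a minimal sketch of the classic profit-scaling FPTAS for that problem. The importance scores, latencies, and budget are illustrative numbers, not the paper's metric or data.

```python
# Agents carry an importance score (stand-in for the paper's metric) and a
# per-round latency cost; we pick the subset maximizing total importance
# within a latency budget, using the profit-scaling knapsack FPTAS.

def select_agents(importance, latency, budget, eps=0.1):
    """Return indices of selected agents; total importance is within a
    (1 - eps) factor of optimal, in time polynomial in len(importance)/eps."""
    n = len(importance)
    scale = eps * max(importance) / n
    scaled = [int(p / scale) for p in importance]   # rounded-down profits
    vmax = sum(scaled)
    INF = float("inf")
    # dp[v] = minimum total latency needed to reach scaled importance v.
    dp = [0.0] + [INF] * vmax
    picks = [set() for _ in range(vmax + 1)]
    for i in range(n):
        for v in range(vmax, scaled[i] - 1, -1):    # descending: 0/1 semantics
            cand = dp[v - scaled[i]] + latency[i]
            if cand < dp[v]:
                dp[v] = cand
                picks[v] = picks[v - scaled[i]] | {i}
    best_v = max(v for v in range(vmax + 1) if dp[v] <= budget)
    return sorted(picks[best_v])

# Example: four agents, latency budget of 6 time units.
print(select_agents([0.9, 0.8, 0.3, 0.5], [4, 3, 2, 3], budget=6))
```

The DP runs over scaled profits rather than latencies, which is what makes the profit-scaling trick yield a polynomial-time approximation even when latencies are arbitrary reals.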
Citations: 0
ML-Enabled Millimeter-Wave Software-Defined Radio With Programmable Directionality
Pub Date : 2024-08-26 DOI: 10.1109/TMLCN.2024.3449834
Marc Jean;Murat Yuksel;Xun Gong
The increasing demand for gigabit-per-second speeds and higher wireless node density is driving the need for spatial reuse and the utilization of higher frequencies above the legacy sub-6 GHz bands. Since these super-6 GHz bands experience high path loss, directional beamforming has been the main method of access to the large amount of bandwidth available at these higher frequencies. Hence, the programming of wireless beams with specific directions is emerging as a requirement for software-defined radio (SDR) platforms. To address this need, we introduce an affordable millimeter-wave (mmWave) testbed. Using a multi-threaded software architecture, the testbed allows for the convenient programming of mmWave beam directions using a high-level programming language, while also providing access to machine learning (ML) libraries as well as SDR methods traditionally deployed in Universal Software Radio Peripheral (USRP) devices. To showcase the potential of the testbed, we tackle the Angle-of-Arrival (AoA) detection problem using reinforcement learning (RL) methods on the receiver side. AoA detection and direction finding is a crucial need for the emerging use of super-6 GHz spectra. We design and implement Q-learning, Double Q-learning, and Deep Q-learning algorithms that passively inspect the Received Signal Strength (RSS) of the mmWave beam and autonomously determine the predicted AoA. The results indicate the feasibility of programming directionality of the wireless beams via ML-based methods as well as solving difficult problems pertaining to emerging directional wireless systems.
The results indicate the feasibility of programming the directionality of wireless beams via ML-based methods, as well as of solving difficult problems pertaining to emerging directional wireless systems.
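As a hedged, toy version of the receiver-side idea (not the testbed's implementation), the sketch below has an agent sweep a discrete set of beam directions, observe noisy RSS from a synthetic beam pattern, and use a tabular bandit-style Q-update to converge on the beam aligned with the true AoA; the angles, noise level, and hyperparameters are invented for illustration.

```python
import math
import random

# Toy AoA estimation: RSS is strongest when the steered beam aligns with the
# hidden arrival angle; a tabular Q-learner finds that beam from noisy reads.
BEAMS = list(range(0, 180, 10))   # candidate beam angles in degrees
TRUE_AOA = 70                     # hidden arrival angle for the simulation

def rss(beam, rng):
    # Synthetic main lobe: Gaussian gain around the true AoA, plus noise.
    gain = math.exp(-((beam - TRUE_AOA) ** 2) / (2.0 * 15 ** 2))
    return gain + rng.gauss(0, 0.05)

def estimate_aoa(episodes=3000, alpha=0.05, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {b: 0.0 for b in BEAMS}
    for _ in range(episodes):
        # Epsilon-greedy beam choice, then a running-average Q update.
        b = rng.choice(BEAMS) if rng.random() < eps else max(q, key=q.get)
        q[b] += alpha * (rss(b, rng) - q[b])
    return max(q, key=q.get)      # best beam angle = predicted AoA

print(estimate_aoa())
```

This is a single-state (bandit) special case of Q-learning; the cited Double and Deep Q-learning variants would replace the table with two tables or a neural network, respectively.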
Citations: 0
Journal: IEEE Transactions on Machine Learning in Communications and Networking