
Latest publications from IEEE Transactions on Machine Learning in Communications and Networking

IEEE Communications Society Board of Governors
Pub Date: 2024-12-11. DOI: 10.1109/TMLCN.2024.3500756
{"title":"IEEE Communications Society Board of Governors","authors":"","doi":"10.1109/TMLCN.2024.3500756","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3500756","url":null,"abstract":"","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"C3-C3"},"PeriodicalIF":0.0,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10792973","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Receiver Architectures for Robust MIMO Rate Splitting Multiple Access
Pub Date: 2024-12-09. DOI: 10.1109/TMLCN.2024.3513267
Dheeraj Raja Kumar;Carles Antón-Haro;Xavier Mestre
Machine learning tools are becoming powerful alternatives for improving the robustness of wireless communication systems. Signal processing procedures that tend to collapse in the presence of model mismatches can be made substantially more robust through the selective use of data-driven techniques. This paper explores the use of neural network (NN)-based receivers to improve the reception of a Rate Splitting Multiple Access (RSMA) system. The intention is to explore several alternatives to conventional successive interference cancellation (SIC) techniques, which are known to be ineffective in the presence of channel state information (CSI) and model errors. The focus is on NN-based architectures that do not need to be retrained at each channel realization. The main idea is to replace some of the basic operations in a conventional multi-antenna SIC receiver with their NN-based equivalents, following a hybrid model/data-driven approach that preserves the main procedures of the model-based signal demodulation chain. Three different architectures are explored, and their performance and computational complexity are characterized under different degrees of model uncertainty, including imperfect channel state information and non-linear channels. We evaluate the performance of the data-driven architectures in an overloaded scenario to analyze their effectiveness against conventional benchmarks. The study indicates that a higher degree of transceiver robustness can be achieved, provided the neural architecture is well designed and fed with the right information.
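As an illustration of the hybrid model/data-driven idea described above, the sketch below keeps the model-based SIC loop (ordering, equalization, cancellation) and exposes the per-stream decision stage as a pluggable `detector` callable, which is where a trained NN would be dropped in. It is a minimal toy, not one of the paper's three architectures; the QPSK constellation, the LMMSE combiner, and the `nearest_symbol` stand-in detector are assumptions made for this example.

```python
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)  # unit-energy constellation

def nearest_symbol(z):
    """Hard decision, used here as a stand-in for a trained NN-based detector."""
    return QPSK[np.argmin(np.abs(QPSK - z))]

def sic_receiver(y, H, detector=nearest_symbol, noise_var=0.01):
    """Ordered SIC: equalize, decide, cancel, repeat.
    `detector` is the pluggable per-stream decision stage; in a hybrid
    model/data-driven receiver it would be a trained NN that is robust to CSI
    errors, while the cancellation loop itself stays model-based."""
    y = y.copy()
    n_rx, n_streams = H.shape
    x_hat = np.zeros(n_streams, dtype=complex)
    remaining = list(range(n_streams))
    for _ in range(n_streams):
        # detect the stream with the strongest channel first
        k = max(remaining, key=lambda j: np.linalg.norm(H[:, j]))
        # LMMSE combiner for stream k, treating the remaining streams as interference
        Hr = H[:, remaining]
        w = np.linalg.solve(Hr @ Hr.conj().T + noise_var * np.eye(n_rx), H[:, k])
        x_hat[k] = detector(w.conj() @ y)
        # cancel the detected stream and move on
        y = y - H[:, k] * x_hat[k]
        remaining.remove(k)
    return x_hat

# toy usage: 4x2 MIMO with QPSK streams and light noise
rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))) / np.sqrt(2)
x = QPSK[rng.integers(0, 4, size=2)]
y = H @ x + 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
print(np.allclose(sic_receiver(y, H), x))  # both streams should be recovered (True)
```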
{"title":"Deep Receiver Architectures for Robust MIMO Rate Splitting Multiple Access","authors":"Dheeraj Raja Kumar;Carles Antón-Haro;Xavier Mestre","doi":"10.1109/TMLCN.2024.3513267","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3513267","url":null,"abstract":"Machine Learning tools are becoming very powerful alternatives to improve the robustness of wireless communication systems. Signal processing procedures that tend to collapse in the presence of model mismatches can be effectively improved and made robust by incorporating the selective use of data-driven techniques. This paper explores the use of neural network (NN)-based receivers to improve the reception of a Rate Splitting Multiple Access (RSMA) system. The intention is to explore several alternatives to conventional successive interference cancellation (SIC) techniques, which are known to be ineffective in the presence of channel state information (CSI) and model errors. The focus is on NN-based architectures that do not need to be retrained at each channel realization. The main idea is to replace some of the basic operations in a conventional multi-antenna SIC receiver by their NN-based equivalents, following a hybrid Model/Data-driven based approach that preserves the main procedures in the model-based signal demodulation chain. Three different architectures are explored along with their performance and computational complexity, characterized under different degrees of model uncertainty, including imperfect channel state information and non-linear channels. We evaluate the performance of data-driven architectures in overloaded scenario to analyze its effectiveness against conventional benchmarks. The study dictates that a higher degree of robustness of transceiver can be achieved, provided the neural architecture is well-designed and fed with the right information.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"45-63"},"PeriodicalIF":0.0,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10781451","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142844397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Toward Understanding Federated Learning over Unreliable Networks
Pub Date: 2024-12-04. DOI: 10.1109/TMLCN.2024.3511475
Chenyuan Feng;Ahmed Arafa;Zihan Chen;Mingxiong Zhao;Tony Q. S. Quek;Howard H. Yang
This paper studies the efficiency of training a statistical model among an edge server and multiple clients via Federated Learning (FL), a machine learning method that preserves data privacy during training, over wireless networks. Due to unreliable wireless channels and constrained communication resources, the server can only choose a handful of clients for parameter updates during each communication round. To address this issue, analytical expressions are derived to characterize the FL convergence rate, accounting for key features from both the communication and algorithmic aspects, including transmission reliability, scheduling policies, and the momentum method. First, the analysis reveals that either carefully designed user scheduling policies or additional bandwidth that accommodates more clients in each communication round can expedite model training in networks with reliable connections. However, these methods become ineffective when the connection is erratic. Second, it is verified that incorporating the momentum method into the model training algorithm accelerates the rate of convergence and provides greater resilience against transmission failures. Finally, extensive empirical simulations are provided to verify these theoretical findings and performance enhancements.
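The interplay of unreliable uplinks and server-side momentum can be sketched in a few lines. The snippet below is a toy federated-averaging round in which each client's update survives with probability `p_success` and the server keeps a momentum buffer across rounds; it is meant only to illustrate why momentum adds resilience to transmission failures. The quadratic toy objective, step sizes, and Bernoulli loss model are assumptions, not the paper's system model or analysis.

```python
import numpy as np

def fl_round(global_w, client_grads, p_success, velocity, lr=0.1, beta=0.9, rng=None):
    """One FL round over unreliable uplinks with server-side momentum.
    Only client updates that survive the Bernoulli(p_success) transmission are
    aggregated; the momentum buffer carries information across rounds, which is
    what provides resilience when many updates are lost."""
    rng = rng or np.random.default_rng()
    received = [g for g in client_grads if rng.random() < p_success]
    if received:                                   # aggregate whatever arrived
        velocity = beta * velocity + np.mean(received, axis=0)
    return global_w - lr * velocity, velocity

# toy problem: clients hold noisy gradients of f(w) = ||w||^2 / 2
rng = np.random.default_rng(1)
w, v = np.ones(5), np.zeros(5)
for _ in range(200):
    grads = [w + 0.05 * rng.standard_normal(5) for _ in range(10)]
    w, v = fl_round(w, grads, p_success=0.5, velocity=v, rng=rng)
print(round(float(np.linalg.norm(w)), 3))  # shrinks toward 0 despite 50% packet loss
```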
{"title":"Toward Understanding Federated Learning over Unreliable Networks","authors":"Chenyuan Feng;Ahmed Arafa;Zihan Chen;Mingxiong Zhao;Tony Q. S. Quek;Howard H. Yang","doi":"10.1109/TMLCN.2024.3511475","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3511475","url":null,"abstract":"This paper studies the efficiency of training a statistical model among an edge server and multiple clients via Federated Learning (FL) – a machine learning method that preserves data privacy in the training process – over wireless networks. Due to unreliable wireless channels and constrained communication resources, the server can only choose a handful of clients for parameter updates during each communication round. To address this issue, analytical expressions are derived to characterize the FL convergence rate, accounting for key features from both communication and algorithmic aspects, including transmission reliability, scheduling policies, and momentum method. First, the analysis reveals that either delicately designed user scheduling policies or expanding higher bandwidth to accommodate more clients in each communication round can expedite model training in networks with reliable connections. However, these methods become ineffective when the connection is erratic. Second, it has been verified that incorporating the momentum method into the model training algorithm accelerates the rate of convergence and provides greater resilience against transmission failures. Last, extensive empirical simulations are provided to verify these theoretical discoveries and enhancements in performance.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"80-97"},"PeriodicalIF":0.0,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10777576","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142880294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A New Heterogeneous Hybrid Massive MIMO Receiver With an Intrinsic Ability of Removing Phase Ambiguity of DOA Estimation via Machine Learning
Pub Date: 2024-11-26. DOI: 10.1109/TMLCN.2024.3506874
Feng Shu;Baihua Shi;Yiwen Chen;Jiatong Bai;Yifan Li;Tingting Liu;Zhu Han;Xiaohu You
Massive multiple-input multiple-output (MIMO) antenna arrays entail huge circuit costs and computational complexity. To satisfy the need for high precision and low cost in future green wireless communication, the conventional hybrid analog and digital MIMO receive structure emerges as a natural choice. However, it suffers from phase ambiguity in direction-of-arrival (DOA) estimation and requires at least two time slots to complete a single DOA measurement, with the first time slot generating the set of candidate solutions and the second finding the true direction by receive beamforming over this set, which leads to low time efficiency. To address this problem, a new heterogeneous sub-connected hybrid analog and digital (H²AD) MIMO structure with an intrinsic ability to remove phase ambiguity is proposed, and a corresponding new framework is developed to implement rapid, high-precision DOA estimation using only a single time slot. The proposed framework consists of two steps: 1) form a set of candidate solutions using existing methods like MUSIC; 2) find the class of true solutions and compute the class mean. To infer the set of true solutions, we propose two new clustering methods: weight global minimum distance (WGMD) and weight local minimum distance (WLMD). Next, we also enhance two classic clustering methods: accelerating local weighted k-means (ALW-K-means) and an improved density-based clustering (improved DBSCAN). Additionally, the corresponding closed-form expression of the Cramer-Rao lower bound (CRLB) is derived. Simulation results show that the proposed frameworks using the above four clustering methods can approach the CRLB in almost all signal-to-noise ratio (SNR) regions except for extremely low SNR (SNR < -5 dB). The four clustering methods rank in decreasing order of accuracy as follows: WGMD, improved DBSCAN, ALW-K-means, and WLMD.
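Step 2 of the framework, grouping the ambiguous candidates and averaging the consistent class, can be illustrated with a generic minimum-distance rule. The sketch below is not the paper's WGMD/WLMD clustering: it simply builds one class per candidate of a reference sub-array by nearest-neighbor matching across the other sub-arrays and returns the mean of the tightest class. The candidate values and the unweighted spread criterion are assumptions for illustration only.

```python
import numpy as np

def cluster_mean_doa(candidate_sets):
    """candidate_sets: list of 1-D arrays of candidate DOAs (degrees), one per
    sub-array; each set contains the true direction plus ambiguous aliases.
    Build one class per candidate of the first sub-array by picking the nearest
    candidate from every other sub-array, keep the class with the smallest
    total spread, and return its mean as the DOA estimate.
    A generic minimum-distance rule for illustration, not the paper's WGMD/WLMD."""
    ref = np.asarray(candidate_sets[0])
    best_mean, best_spread = None, np.inf
    for theta in ref:
        members = [theta]
        for s in candidate_sets[1:]:
            s = np.asarray(s)
            members.append(s[np.argmin(np.abs(s - theta))])  # closest alias in this sub-array
        members = np.array(members)
        spread = np.sum(np.abs(members - members.mean()))
        if spread < best_spread:
            best_spread, best_mean = spread, members.mean()
    return best_mean

# toy usage: true DOA = 20 degrees, three sub-arrays with different aliasing offsets
sets = [np.array([20.1, -35.0, 62.0]),
        np.array([19.8, -10.0, 44.0]),
        np.array([20.3, 5.0, -50.0])]
print(round(cluster_mean_doa(sets), 2))  # ~20.07, the mean of the consistent class
```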
{"title":"A New Heterogeneous Hybrid Massive MIMO Receiver With an Intrinsic Ability of Removing Phase Ambiguity of DOA Estimation via Machine Learning","authors":"Feng Shu;Baihua Shi;Yiwen Chen;Jiatong Bai;Yifan Li;Tingting Liu;Zhu Han;Xiaohu You","doi":"10.1109/TMLCN.2024.3506874","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3506874","url":null,"abstract":"Massive multiple input multiple output (MIMO) antenna arrays eventuate a huge amount of circuit costs and computational complexity. To satisfy the needs of high precision and low cost in future green wireless communication, the conventional hybrid analog and digital MIMO receive structure emerges a natural choice. But it exists an issue of the phase ambiguity in direction of arrival (DOA) estimation and requires at least two time-slots to complete one-time DOA measurement with the first time-slot generating the set of candidate solutions and the second one to find a true direction by received beamforming over this set, which will lead to a low time-efficiency. To address this problem,a new heterogeneous sub-connected hybrid analog and digital (\u0000<inline-formula> <tex-math>$mathrm {H}^{2}$ </tex-math></inline-formula>\u0000AD) MIMO structure is proposed with an intrinsic ability of removing phase ambiguity, and then a corresponding new framework is developed to implement a rapid high-precision DOA estimation using only single time-slot. The proposed framework consists of two steps: 1) form a set of candidate solutions using existing methods like MUSIC; 2) find the class of the true solutions and compute the class mean. To infer the set of true solutions, we propose two new clustering methods: weight global minimum distance (WGMD) and weight local minimum distance (WLMD). Next, we also enhance two classic clustering methods: accelerating local weighted k-means (ALW-K-means) and improved density. Additionally, the corresponding closed-form expression of Cramer-Rao lower bound (CRLB) is derived. Simulation results show that the proposed frameworks using the above four clustering can approach the CRLB in almost all signal to noise ratio (SNR) regions except for extremely low SNR (SNR \u0000<inline-formula> <tex-math>$lt -5$ </tex-math></inline-formula>\u0000 dB). Four clustering methods have an accuracy decreasing order as follows: WGMD, improved DBSCAN, ALW-K-means and WLMD.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"17-29"},"PeriodicalIF":0.0,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10767772","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142844437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Semi-Supervised Learning via Cross-Prediction-Powered Inference for Wireless Systems
Pub Date: 2024-11-20. DOI: 10.1109/TMLCN.2024.3503543
Houssem Sifaou;Osvaldo Simeone
In many wireless application scenarios, acquiring labeled data can be prohibitively costly, requiring complex optimization processes or measurement campaigns. Semi-supervised learning leverages unlabeled samples to augment the available dataset by assigning synthetic labels obtained via machine learning (ML)-based predictions. However, treating the synthetic labels as true labels may yield worse-performing models as compared to models trained using only labeled data. Inspired by the recently developed prediction-powered inference (PPI) framework, this work investigates how to leverage the synthetic labels produced by an ML model while accounting for their inherent bias with respect to the true labels. To this end, we first review PPI and its recent extensions, namely tuned PPI and cross-prediction-powered inference (CPPI). Then, we introduce two novel variants of PPI. The first, referred to as tuned CPPI, provides CPPI with an additional degree of freedom in adapting to the quality of the ML-based labels. The second, meta-CPPI (MCPPI), extends tuned CPPI via the joint optimization of the ML labeling models and of the parameters of interest. Finally, we showcase two applications of PPI-based techniques in wireless systems, namely beam alignment based on channel knowledge maps in millimeter-wave systems and received signal strength information-based indoor localization. Simulation results show the advantages of PPI-based techniques over conventional approaches that rely solely on labeled data or that apply standard pseudo-labeling strategies from semi-supervised learning. Furthermore, the proposed tuned CPPI method is observed to provide the best performance among all benchmark schemes, especially in the regime of limited labeled data.
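To make the debiasing idea concrete, the sketch below implements the textbook PPI mean estimator with a scalar tuning weight, in the spirit of tuned PPI: predictions on the large unlabeled set drive the estimate, and a rectifier computed on the small labeled set removes their bias. This is the basic single-model form, not the cross-prediction (CPPI) or meta-CPPI variants proposed in the paper, and the plug-in rule for the tuning weight is an assumption for illustration.

```python
import numpy as np

def ppi_mean(y_lab, f_lab, f_unlab, lam=1.0):
    """Prediction-powered estimate of E[Y].
    y_lab   : true labels on the small labeled set
    f_lab   : model predictions on the same labeled set
    f_unlab : model predictions on the large unlabeled set
    lam     : weight on the predictions (lam=1 is plain PPI,
              lam=0 falls back to the classical labeled-only estimate)."""
    rectifier = np.mean(y_lab - lam * f_lab)      # corrects the prediction bias
    return lam * np.mean(f_unlab) + rectifier

def tuned_lambda(y_lab, f_lab):
    """Variance-reducing weight estimated from the labeled set
    (a simple plug-in rule in the spirit of tuned PPI)."""
    c = np.cov(f_lab, y_lab)[0, 1]
    v = np.var(f_lab)
    return 0.0 if v == 0 else float(np.clip(c / v, 0.0, 1.0))

# toy usage: a biased, scaled predictor of Y
rng = np.random.default_rng(0)
y_unlab_true = rng.normal(3.0, 1.0, 10000)        # unseen ground truth
f_unlab = 0.8 * y_unlab_true + 0.5                # biased predictions
y_lab = rng.normal(3.0, 1.0, 100)
f_lab = 0.8 * y_lab + 0.5
lam = tuned_lambda(y_lab, f_lab)
print(round(ppi_mean(y_lab, f_lab, f_unlab, lam), 3))  # close to the true mean 3.0
```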
{"title":"Semi-Supervised Learning via Cross-Prediction-Powered Inference for Wireless Systems","authors":"Houssem Sifaou;Osvaldo Simeone","doi":"10.1109/TMLCN.2024.3503543","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3503543","url":null,"abstract":"In many wireless application scenarios, acquiring labeled data can be prohibitively costly, requiring complex optimization processes or measurement campaigns. Semi-supervised learning leverages unlabeled samples to augment the available dataset by assigning synthetic labels obtained via machine learning (ML)-based predictions. However, treating the synthetic labels as true labels may yield worse-performing models as compared to models trained using only labeled data. Inspired by the recently developed prediction-powered inference (PPI) framework, this work investigates how to leverage the synthetic labels produced by an ML model, while accounting for the inherent bias concerning true labels. To this end, we first review PPI and its recent extensions, namely tuned PPI and cross-prediction-powered inference (CPPI). Then, we introduce two novel variants of PPI. The first, referred to as tuned CPPI, provides CPPI with an additional degree of freedom in adapting to the quality of the ML-based labels. The second, meta-CPPI (MCPPI), extends tuned CPPI via the joint optimization of the ML labeling models and of the parameters of interest. Finally, we showcase two applications of PPI-based techniques in wireless systems, namely beam alignment based on channel knowledge maps in millimeter-wave systems and received signal strength information-based indoor localization. Simulation results show the advantages of PPI-based techniques over conventional approaches that rely solely on labeled data or that apply standard pseudo-labeling strategies from semi-supervised learning. Furthermore, the proposed tuned CPPI method is observed to guarantee the best performance among all benchmark schemes, especially in the regime of limited labeled data.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"30-44"},"PeriodicalIF":0.0,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10758826","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142844291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reinforcement-Learning-Based Trajectory Design and Phase-Shift Control in UAV-Mounted-RIS Communications
Pub Date: 2024-11-19. DOI: 10.1109/TMLCN.2024.3502576
Tianjiao Sun;Sixing Yin;Li Deng;F. Richard Yu
Taking advantage of both unmanned aerial vehicles (UAVs) and reconfigurable intelligent surfaces (RISs), UAV-mounted-RIS systems are expected to enhance transmission performance in complicated wireless environments. In this paper, we focus on system design for a UAV-mounted-RIS system and investigate joint optimization of the RIS's phase shifts and the UAV's trajectory. To cope with the practical issue that information on the user terminals' (UTs) locations and channel states is inaccessible, a reinforcement learning (RL)-based solution is proposed to find the optimal policy within a finite number of "trial-and-error" steps. As the action space is continuous, the deep deterministic policy gradient (DDPG) algorithm is applied to train the RL model. However, the online interaction between the agent and the environment may lead to instability during training, and the assumption of (first-order) Markovian state transitions can be impractical in real-world problems. Therefore, the decision transformer (DT) algorithm is employed as an alternative for RL model training to adapt to more general state-transition behavior. Experimental results demonstrate that the proposed RL solutions are highly efficient in model training and achieve performance close to the benchmark, which relies on conventional optimization algorithms with the UTs' locations and channel parameters explicitly known beforehand.
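The decision variables described above (UAV displacement plus one phase shift per RIS element) form a single continuous action vector, which is what motivates DDPG or a decision transformer. The toy environment below shows that action structure and a sum-rate-style reward, with a random policy standing in for the learned agent; the channel model, reward, element count, and speed limit are all assumptions made for illustration and do not reproduce the paper's system model.

```python
import numpy as np

def sum_rate(uav_pos, phases, ut_pos, noise=1e-9):
    """Toy sum-rate reward: each RIS element contributes a phase-shifted
    unit-gain path from the RIS-carrying UAV to each UT, with a simple 1/d
    amplitude law. Purely illustrative, not the paper's channel model."""
    rate = 0.0
    for ut in ut_pos:
        d = np.linalg.norm(uav_pos - ut) + 1e-6
        h = np.sum(np.exp(1j * phases)) / d          # combined reflected channel
        rate += np.log2(1.0 + np.abs(h) ** 2 / noise)
    return rate

def step(uav_pos, action, ut_pos, v_max=5.0):
    """One environment step with a continuous action [dx, dy, phase_1..phase_N]:
    move the UAV (clipped to v_max, constant altitude), set the RIS phase
    shifts, and return the next state and the sum-rate reward."""
    move, phases = np.clip(action[:2], -v_max, v_max), np.mod(action[2:], 2 * np.pi)
    uav_pos = uav_pos + np.append(move, 0.0)
    return uav_pos, sum_rate(uav_pos, phases, ut_pos)

# toy rollout with a random policy (a DDPG or decision-transformer agent
# would replace this by a learned mapping from state to action)
rng = np.random.default_rng(0)
uav = np.array([0.0, 0.0, 50.0])
uts = [np.array([30.0, 10.0, 0.0]), np.array([-20.0, 40.0, 0.0])]
for t in range(3):
    a = np.concatenate([rng.uniform(-5, 5, 2), rng.uniform(0, 2 * np.pi, 16)])
    uav, r = step(uav, a, uts)
    print(t, round(float(r), 2))
```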
{"title":"Reinforcement-Learning-Based Trajectory Design and Phase-Shift Control in UAV-Mounted-RIS Communications","authors":"Tianjiao Sun;Sixing Yin;Li Deng;F. Richard Yu","doi":"10.1109/TMLCN.2024.3502576","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3502576","url":null,"abstract":"Taking advantages of both unmanned aerial vehicles (UAVs) and reconfigurable intelligent surfaces (RISs), UAV-mounted-RIS systems are expected to enhance transmission performance in complicated wireless environments. In this paper, we focus on system design for a UAV-mounted-RIS system and investigate joint optimization for the RIS’s phase shift and the UAV’s trajectory. To cope with the practical issue of inaccessible information on the user terminals’ (UTs) location and channel state, a reinforcement learning (RL)-based solution is proposed to find the optimal policy with finite steps of “trial-and-error”. As the action space is continuous, the deep deterministic policy gradient (DDPG) algorithm is applied to train the RL model. However, the online interaction between the agent and environment may lead to instability during the training and the assumption of (first-order) Markovian state transition could be impractical in real-world problems. Therefore, the decision transformer (DT) algorithm is employed as an alternative for RL model training to adapt to more general situations of state transition. Experimental results demonstrate that the proposed RL solutions are highly efficient in model training along with acceptable performance close to the benchmark, which relies on conventional optimization algorithms with the UT’s locations and channel parameters explicitly known beforehand.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"163-175"},"PeriodicalIF":0.0,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10758222","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142918339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A2PC: Augmented Advantage Pointer-Critic Model for Low Latency on Mobile IoT With Edge Computing
Pub Date: 2024-11-18. DOI: 10.1109/TMLCN.2024.3501217
Rodrigo Carvalho;Faroq Al-Tam;Noélia Correia
As a growing trend, edge computing infrastructures are starting to be integrated with Internet of Things (IoT) systems to facilitate time-critical applications. These systems often need to process data whose usefulness is limited in time, so the edge becomes vital to the development of reactive IoT applications with real-time requirements. Although different architectural designs will always have advantages and disadvantages, mobile gateways appear to be particularly relevant in enabling this integration with the edge, especially in the context of wide area networks with occasional data generation. In these scenarios, mobility planning is necessary, as aspects of the technology need to be aligned with the temporal needs of an application. The nature of this planning problem makes cutting-edge deep reinforcement learning (DRL) techniques useful for solving pertinent issues, such as dealing with multiple dimensions in the action space while aiming for optimal system performance. This article presents a novel scalable DRL model that incorporates a pointer network (Ptr-Net) and an actor-critic algorithm to handle complex action spaces. The model synchronously determines the gateway location and visit time. Ultimately, the gateways are able to attain high-quality trajectory planning with reduced latency.
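The pointer-network part of such a model boils down to an attention step that scores candidate nodes against the decoder state and masks the ones already visited. The sketch below shows that generic mechanism with random projection weights; it is not the A2PC architecture itself (which couples the pointer with an advantage actor-critic head and a visit-time output), and the node features and dimensions are assumptions for illustration.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - np.max(x))
    return z / z.sum()

def pointer_step(query, node_feats, visited, W_q, W_k):
    """One pointer-attention decoding step: score every candidate node against
    the decoder query, mask already-visited nodes, and return a distribution
    over which node the mobile gateway should visit next.
    Generic pointer-network mechanics; in A2PC the projections would be trained
    with an actor-critic loss rather than drawn at random."""
    q = W_q @ query                          # project decoder state
    keys = node_feats @ W_k.T                # project node features
    scores = keys @ q / np.sqrt(len(q))      # scaled dot-product attention
    scores[visited] = -np.inf                # never revisit a node
    return softmax(scores)

# toy usage: 6 candidate nodes described by (x, y, data_deadline)
rng = np.random.default_rng(0)
nodes = rng.uniform(0, 1, size=(6, 3))
W_q, W_k = rng.standard_normal((8, 3)), rng.standard_normal((8, 3))
probs = pointer_step(query=nodes.mean(axis=0), node_feats=nodes,
                     visited=np.array([False, True, False, False, False, False]),
                     W_q=W_q, W_k=W_k)
print(np.round(probs, 3), int(np.argmax(probs)))   # visited node gets probability 0
```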
{"title":"A2PC: Augmented Advantage Pointer-Critic Model for Low Latency on Mobile IoT With Edge Computing","authors":"Rodrigo Carvalho;Faroq Al-Tam;Noélia Correia","doi":"10.1109/TMLCN.2024.3501217","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3501217","url":null,"abstract":"As a growing trend, edge computing infrastructures are starting to be integrated with Internet of Things (IoT) systems to facilitate time-critical applications. These systems often require the processing of data with limited usefulness in time, so the edge becomes vital in the development of such reactive IoT applications with real-time requirements. Although different architectural designs will always have advantages and disadvantages, mobile gateways appear to be particularly relevant in enabling this integration with the edge, particularly in the context of wide area networks with occasional data generation. In these scenarios, mobility planning is necessary, as aspects of the technology need to be aligned with the temporal needs of an application. The nature of this planning problem makes cutting-edge deep reinforcement learning (DRL) techniques useful in solving pertinent issues, such as having to deal with multiple dimensions in the action space while aiming for optimum levels of system performance. This article presents a novel scalable DRL model that incorporates a pointer-network (Ptr-Net) and an actor-critic algorithm to handle complex action spaces. The model synchronously determines the gateway location and visit time. Ultimately, the gateways are able to attain high-quality trajectory planning with reduced latency.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"1-16"},"PeriodicalIF":0.0,"publicationDate":"2024-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10755120","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142821217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimizing Power Allocation in HAPs Assisted LEO Satellite Communications
Pub Date: 2024-11-04. DOI: 10.1109/TMLCN.2024.3491054
Zain Ali;Zouheir Rezki;Mohamed-Slim Alouini
The next generation of communication systems will require robust connectivity for millions of ground devices, such as sensors or mobile devices in remote or disaster-stricken areas, to be connected to the network. Non-terrestrial network (NTN) nodes can play a vital role in fulfilling these requirements. Specifically, low-earth-orbit (LEO) satellites have emerged as an efficient and cost-effective means of connecting devices over long distances through space. However, due to their low power and environmental limitations, LEO satellites may require assistance from aerial devices such as high-altitude platforms (HAPs) or unmanned aerial vehicles to forward their data to the ground devices. Moreover, the limited power available at the NTN nodes makes it crucial to utilize the available resources efficiently. In this paper, we present a model in which a LEO satellite communicates with multiple ground devices with the help of HAPs that relay the LEO data to the ground devices. We formulate the problem of optimizing power allocation at the LEO satellite and all the HAPs to maximize the sum rate of the system. To exploit the benefits of free-space optical (FSO) communication in satellites, we consider the LEO satellite transmitting data to the HAPs over FSO links; the data are then broadcast to the connected ground devices over radio-frequency channels. We transform the complex non-convex problem into a convex form and compute a Karush-Kuhn-Tucker (KKT) conditions-based solution for power allocation at the LEO satellite and the HAPs. Then, to reduce computation time, we propose a soft actor-critic (SAC) reinforcement learning (RL) framework that provides the solution in significantly less time while delivering performance comparable to the KKT scheme. Our simulation results demonstrate that the proposed solutions provide excellent performance and are scalable to any number of HAPs and ground devices in the system.
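The KKT conditions for this kind of sum-rate maximization under a total power budget reduce to the classical water-filling form, which the sketch below solves by bisection on the water level. It is a single-node, textbook version given only for intuition; the paper's joint LEO/HAP problem with FSO feeder links has additional structure, and the channel gains and budget used here are assumptions.

```python
import numpy as np

def water_filling(gains, p_total, tol=1e-9):
    """Allocate p_total across parallel channels with power gains `gains`
    (gain = |h|^2 / noise) to maximize sum log2(1 + g_i * p_i).
    From the KKT conditions, p_i = max(0, level - 1/g_i) with the water level
    fixed by the total-power constraint; found here by bisection.
    A textbook single-node sketch, not the paper's joint LEO/HAP optimization."""
    lo, hi = 0.0, p_total + 1.0 / np.min(gains)    # bracket the water level
    while hi - lo > tol:
        level = 0.5 * (lo + hi)
        p = np.maximum(0.0, level - 1.0 / gains)
        if p.sum() > p_total:
            hi = level
        else:
            lo = level
    return np.maximum(0.0, lo - 1.0 / gains)

# toy usage: four ground links with unequal channel gains
gains = np.array([2.0, 1.0, 0.5, 0.1])
p = water_filling(gains, p_total=4.0)
print(np.round(p, 3), round(float(p.sum()), 3))          # weakest link gets no power
print(round(float(np.sum(np.log2(1 + gains * p))), 3))   # achieved sum rate
```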
下一代通信设备需要强大的连接能力,以便将偏远或受灾地区的传感器或移动设备等数以百万计的地面设备连接到网络。非地面网络(NTN)节点可在满足这些要求方面发挥重要作用。具体来说,低地轨道(LEO)卫星已成为一种高效、经济的技术,可通过空间远距离连接设备。然而,由于低功率和环境限制,低地轨道卫星可能需要高空平台(HAP)或无人飞行器等空中设备的协助,才能将数据传送到地面设备。此外,近地轨道网的功率有限,因此有效利用可用资源至关重要。在本文中,我们提出了一个低地轨道卫星与多个地面设备通信的模型,借助 HAP 将低地轨道数据转发给地面设备。我们提出的问题是优化低地轨道卫星和所有 HAP 的功率分配,使系统总速率最大化。为了利用卫星自由空间光学(FSO)通信的优势,我们考虑由低地轨道卫星通过 FSO 链路向 HAP 发送数据,然后通过无线电频率信道将数据广播给连接的地面设备。我们将复杂的非凸问题转化为凸问题,并计算出基于卡鲁什-库恩-塔克(KKT)条件的低地轨道卫星和 HAP 功率分配问题解决方案。然后,为了减少计算时间,我们提出了一种软行为批判(SAC)强化学习(RL)框架,该框架在提供与 KKT 方案性能相当的解决方案的同时,大大缩短了计算时间。我们的仿真结果表明,所提出的解决方案性能卓越,可扩展至系统中任何数量的 HAP 和地面设备。
{"title":"Optimizing Power Allocation in HAPs Assisted LEO Satellite Communications","authors":"Zain Ali;Zouheir Rezki;Mohamed-Slim Alouini","doi":"10.1109/TMLCN.2024.3491054","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3491054","url":null,"abstract":"The next generation of communication devices will require robust connectivity for millions of ground devices such as sensors or mobile devices in remote or disaster-stricken areas to be connected to the network. Non-terrestrial network (NTN) nodes can play a vital role in fulfilling these requirements. Specifically, low-earth orbiting (LEO) satellites have emerged as an efficient and cost-effective technique to connect devices over long distances through space. However, due to their low power and environmental limitations, LEO satellites may require assistance from aerial devices such as high-altitude platforms (HAPs) or unmanned aerial vehicles to forward their data to the ground devices. Moreover, the limited power available at the NTNs makes it crucial to utilize available resources efficiently. In this paper, we present a model in which a LEO satellite communicates with multiple ground devices with the help of HAPs that relay LEO data to the ground devices. We formulate the problem of optimizing power allocation at the LEO satellite and all the HAPs to maximize the sum-rate of the system. To take advantage of the benefits of free-space optical (FSO) communication in satellites, we consider the LEO transmitting data to the HAPs on FSO links, which are then broadcast to the connected ground devices on radio frequency channels. We transform the complex non-convex problem into a convex form and compute the Karush-Kuhn-Tucker (KKT) conditions-based solution of the problem for power allocation at the LEO satellite and HAPs. Then, to reduce computation time, we propose a soft actor-critic (SAC) reinforcement learning (RL) framework that provides the solution in significantly less time while delivering comparable performance to the KKT scheme. Our simulation results demonstrate that the proposed solutions provide excellent performance and are scalable to any number of HAPs and ground devices in the system.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"1661-1677"},"PeriodicalIF":0.0,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10741546","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142636509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Attention-Aided Outdoor Localization in Commercial 5G NR Systems
Pub Date: 2024-11-01. DOI: 10.1109/TMLCN.2024.3490496
Guoda Tian;Dino Pjanić;Xuesong Cai;Bo Bernhardsson;Fredrik Tufvesson
The integration of high-precision cellular localization and machine learning (ML) is considered a cornerstone technique in future cellular navigation systems, offering unparalleled accuracy and functionality. This study focuses on localization based on uplink channel measurements in a fifth-generation (5G) new radio (NR) system. An attention-aided ML-based single-snapshot localization pipeline is presented, which consists of several cascaded blocks, namely a signal processing block, an attention-aided block, and an uncertainty estimation block. Specifically, the signal processing block generates an impulse response beam matrix for all beams. The attention-aided block trains on the channel impulse responses using an attention-aided network, which captures the correlation between impulse responses for different beams. The uncertainty estimation block predicts the probability density function of the user equipment (UE) position, thereby also indicating the confidence level of the localization result. Two representative uncertainty estimation techniques, the negative log-likelihood and the regression-by-classification techniques, are applied and compared. Furthermore, for dynamic measurements with multiple snapshots available, we combine the proposed pipeline with a Kalman filter to enhance localization accuracy. To evaluate our approach, we extract channel impulse responses for different beams from a commercial base station. The outdoor measurement campaign covers Line-of-Sight (LoS), Non Line-of-Sight (NLoS), and a mix of LoS and NLoS scenarios. The results show that sub-meter localization accuracy can be achieved.
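For the multi-snapshot case mentioned above, the fusion step can be illustrated with a standard constant-velocity Kalman filter over the per-snapshot position fixes, where the measurement variance could come from the pipeline's uncertainty estimate. The sketch below is a generic 2-D filter, not the paper's exact formulation; the motion model, noise settings, and synthetic track are assumptions.

```python
import numpy as np

def kalman_smooth(positions, dt=0.1, q=1.0, r=1.0):
    """Fuse single-snapshot 2-D position estimates with a constant-velocity
    Kalman filter. `positions` is an (N, 2) array of per-snapshot fixes, `r`
    their measurement variance (e.g. from an uncertainty head), `q` the
    process-noise intensity. A generic smoothing sketch."""
    F = np.eye(4); F[0, 2] = F[1, 3] = dt                  # state: [x, y, vx, vy]
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0
    Q = q * np.eye(4); R = r * np.eye(2)
    x = np.array([positions[0, 0], positions[0, 1], 0.0, 0.0])
    P = np.eye(4) * 10.0
    out = []
    for z in positions:
        x = F @ x; P = F @ P @ F.T + Q                     # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
        x = x + K @ (z - H @ x); P = (np.eye(4) - K @ H) @ P   # update
        out.append(x[:2].copy())
    return np.array(out)

# toy usage: noisy single-snapshot fixes along a straight track
rng = np.random.default_rng(0)
truth = np.stack([np.linspace(0, 10, 50), np.linspace(0, 5, 50)], axis=1)
fixes = truth + rng.normal(0, 0.5, truth.shape)
smoothed = kalman_smooth(fixes, dt=0.2, q=0.05, r=0.25)
print(round(float(np.mean(np.linalg.norm(fixes - truth, axis=1))), 3),
      round(float(np.mean(np.linalg.norm(smoothed - truth, axis=1))), 3))
# the filtered error is typically lower than the raw single-snapshot error
```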
{"title":"Attention-Aided Outdoor Localization in Commercial 5G NR Systems","authors":"Guoda Tian;Dino Pjanić;Xuesong Cai;Bo Bernhardsson;Fredrik Tufvesson","doi":"10.1109/TMLCN.2024.3490496","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3490496","url":null,"abstract":"The integration of high-precision cellular localization and machine learning (ML) is considered a cornerstone technique in future cellular navigation systems, offering unparalleled accuracy and functionality. This study focuses on localization based on uplink channel measurements in a fifth-generation (5G) new radio (NR) system. An attention-aided ML-based single-snapshot localization pipeline is presented, which consists of several cascaded blocks, namely a signal processing block, an attention-aided block, and an uncertainty estimation block. Specifically, the signal processing block generates an impulse response beam matrix for all beams. The attention-aided block trains on the channel impulse responses using an attention-aided network, which captures the correlation between impulse responses for different beams. The uncertainty estimation block predicts the probability density function of the user equipment (UE) position, thereby also indicating the confidence level of the localization result. Two representative uncertainty estimation techniques, the negative log-likelihood and the regression-by-classification techniques, are applied and compared. Furthermore, for dynamic measurements with multiple snapshots available, we combine the proposed pipeline with a Kalman filter to enhance localization accuracy. To evaluate our approach, we extract channel impulse responses for different beams from a commercial base station. The outdoor measurement campaign covers Line-of-Sight (LoS), Non Line-of-Sight (NLoS), and a mix of LoS and NLoS scenarios. The results show that sub-meter localization accuracy can be achieved.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"1678-1692"},"PeriodicalIF":0.0,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10741343","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142694615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Information Bottleneck-Based Domain Adaptation for Hybrid Deep Learning in Scalable Network Slicing
Pub Date: 2024-10-24. DOI: 10.1109/TMLCN.2024.3485520
Tianlun Hu;Qi Liao;Qiang Liu;Georg Carle
Network slicing enables operators to efficiently support diverse applications on a shared infrastructure. However, the evolving complexity of networks, compounded by inter-cell interference, necessitates agile and adaptable resource management. While deep learning offers solutions for coping with complexity, its adaptability to dynamic configurations remains limited. In this paper, we propose a novel hybrid deep learning algorithm called IDLA (integrated deep learning with the Lagrangian method). This integrated approach aims to enhance the scalability, flexibility, and robustness of slicing resource allocation by harnessing the high approximation capability of deep learning and the strong generalization of classical non-linear optimization methods. We then introduce a variational information bottleneck (VIB)-assisted domain adaptation (DA) approach to enhance IDLA's adaptability across diverse network environments and conditions. We propose pre-training a VIB-based Quality of Service (QoS) estimator using slice-specific inputs shared across all source-domain slices. Each target-domain slice can deploy this estimator to predict its QoS and optimize slice resource allocation using the IDLA algorithm. The VIB-based estimator is continuously fine-tuned with a mixture of samples from both the source and target domains until convergence. Evaluated on a multi-cell network with time-varying slice configurations, the VIB-enhanced IDLA algorithm outperforms baselines such as heuristic and deep reinforcement learning-based solutions, achieving twice the convergence speed and 16.52% higher asymptotic performance after slicing configuration changes. A transferability assessment demonstrates a 25.66% improvement in estimation accuracy with VIB, especially in scenarios with significant domain gaps, highlighting its robustness and effectiveness across diverse domains.
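The hybrid idea of plugging a learned QoS estimator into a classical Lagrangian solver can be sketched with a simple primal-dual loop. In the snippet below the estimator is a black-box callable inside the constraint (here an analytic proxy standing in for a pre-trained VIB model), and resources are allocated by gradient steps on the Lagrangian with dual ascent on the multipliers. The utility, QoS proxy, step sizes, and slice parameters are all assumptions for illustration; this is not the IDLA algorithm as specified in the paper.

```python
import numpy as np

def qos_estimator(alloc, demand):
    """Stand-in for the pre-trained (e.g. VIB-based) QoS estimator: a simple
    analytic proxy mapping allocated resources to per-slice QoS in [0, 1)."""
    return 1.0 - np.exp(-3.0 * alloc / demand)

def allocate_slices(demand, qos_min, budget=1.0, lr=0.05, steps=400):
    """Primal-dual (Lagrangian) resource allocation:
        maximize  sum_i log(alloc_i)                    (proportional fairness)
        s.t.      qos_estimator(alloc_i) >= qos_min_i,  sum_i alloc_i <= budget.
    The QoS estimator is treated as a black box inside the Lagrangian, which is
    the hybrid model/learning idea; the concrete updates are illustrative."""
    n = len(demand)
    alloc = np.full(n, budget / n)
    mu = np.zeros(n)                     # multipliers for the QoS constraints
    nu = 0.0                             # multiplier for the budget constraint
    for _ in range(steps):
        # primal step: gradient ascent on the Lagrangian w.r.t. the allocation
        # (with a learned estimator this derivative would come from autograd)
        dqos = 3.0 / demand * np.exp(-3.0 * alloc / demand)
        alloc = np.clip(alloc + lr * (1.0 / alloc + mu * dqos - nu), 1e-3, budget)
        # dual step: gradient ascent on the multipliers, projected to >= 0
        mu = np.maximum(0.0, mu + lr * (qos_min - qos_estimator(alloc, demand)))
        nu = max(0.0, nu + lr * (alloc.sum() - budget))
    return alloc, qos_estimator(alloc, demand)

# toy usage: three slices with different demands and QoS targets
alloc, qos = allocate_slices(demand=np.array([1.0, 2.0, 0.5]),
                             qos_min=np.array([0.6, 0.4, 0.7]))
print(np.round(alloc, 3), np.round(qos, 3), round(float(alloc.sum()), 3))
```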
{"title":"Information Bottleneck-Based Domain Adaptation for Hybrid Deep Learning in Scalable Network Slicing","authors":"Tianlun Hu;Qi Liao;Qiang Liu;Georg Carle","doi":"10.1109/TMLCN.2024.3485520","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3485520","url":null,"abstract":"Network slicing enables operators to efficiently support diverse applications on a shared infrastructure. However, the evolving complexity of networks, compounded by inter-cell interference, necessitates agile and adaptable resource management. While deep learning offers solutions for coping with complexity, its adaptability to dynamic configurations remains limited. In this paper, we propose a novel hybrid deep learning algorithm called IDLA (integrated deep learning with the Lagrangian method). This integrated approach aims to enhance the scalability, flexibility, and robustness of slicing resource allocation solutions by harnessing the high approximation capability of deep learning and the strong generalization of classical non-linear optimization methods. Then, we introduce a variational information bottleneck (VIB)-assisted domain adaptation (DA) approach to enhance integrated deep learning and Lagrangian method (IDLA)’s adaptability across diverse network environments and conditions. We propose pre-training a variational information bottleneck (VIB)-based Quality of Service (QoS) estimator, using slice-specific inputs shared across all source domain slices. Each target domain slice can deploy this estimator to predict its QoS and optimize slice resource allocation using the IDLA algorithm. This VIB-based estimator is continuously fine-tuned with a mixture of samples from both the source and target domains until convergence. Evaluating on a multi-cell network with time-varying slice configurations, the VIB-enhanced IDLA algorithm outperforms baselines such as heuristic and deep reinforcement learning-based solutions, achieving twice the convergence speed and 16.52% higher asymptotic performance after slicing configuration changes. Transferability assessment demonstrates a 25.66% improvement in estimation accuracy with VIB, especially in scenarios with significant domain gaps, highlighting its robustness and effectiveness across diverse domains.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"1642-1660"},"PeriodicalIF":0.0,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10734592","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142579172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0