
Latest publications in IEEE Transactions on Machine Learning in Communications and Networking

Conditional Denoising Diffusion Probabilistic Models for Data Reconstruction Enhancement in Wireless Communications
Pub Date : 2024-12-25 DOI: 10.1109/TMLCN.2024.3522872
Mehdi Letafati;Samad Ali;Matti Latva-Aho
In this paper, conditional denoising diffusion probabilistic models (CDiffs) are proposed to enhance data transmission and reconstruction over wireless channels. The underlying mechanism of diffusion models is to decompose the data generation process into so-called “denoising” steps. Inspired by this, the key idea is to leverage the generative prior of diffusion models to learn a “noisy-to-clean” transformation of the information signal and thereby enhance data reconstruction. The proposed scheme could be beneficial for communication scenarios in which prior knowledge of the information content is available, e.g., in multimedia transmission. Hence, instead of employing complicated channel codes that reduce the information rate, one can exploit diffusion priors for reliable data reconstruction, especially under extreme channel conditions due to low signal-to-noise ratio (SNR) or hardware-impaired communications. The proposed CDiff-assisted receiver is tailored to the scenario of wireless image transmission using the MNIST dataset. Our numerical results highlight the reconstruction performance of our scheme compared to conventional digital communication as well as a deep neural network (DNN)-based benchmark. It is also shown that more than 10 dB of improvement in reconstruction can be achieved in low-SNR regimes, without the need to reduce the information rate for error correction.
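To make the “noisy-to-clean” idea concrete, the following minimal NumPy sketch runs one reverse (denoising) step of a conditional diffusion model on a received signal. The noise predictor eps_model, the linear variance schedule, and the MNIST-sized toy vector are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ddpm_reverse_step(x_t, t, eps_model, alphas, alpha_bars, cond):
    """One reverse-diffusion step: refine the received signal x_t toward a
    cleaner estimate, conditioned on side information `cond`
    (e.g., the channel-corrupted observation). Placeholder sketch."""
    eps_hat = eps_model(x_t, t, cond)                  # predicted noise (placeholder model)
    coef = (1 - alphas[t]) / np.sqrt(1 - alpha_bars[t])
    mean = (x_t - coef * eps_hat) / np.sqrt(alphas[t])
    if t > 0:
        sigma = np.sqrt(1 - alphas[t])                 # simple variance choice: sigma_t^2 = beta_t
        return mean + sigma * np.random.randn(*x_t.shape)
    return mean

# Toy usage: an identity "noise predictor" stands in for a trained conditional network.
T = 10
betas = np.linspace(1e-4, 2e-2, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)
x = np.random.randn(28 * 28)                           # e.g., a noisy MNIST-sized signal
dummy_eps_model = lambda x_t, t, cond: np.zeros_like(x_t)
for t in reversed(range(T)):
    x = ddpm_reverse_step(x, t, dummy_eps_model, alphas, alpha_bars, cond=None)
```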
Citations: 0
Multi-Agent Reinforcement Learning With Action Masking for UAV-Enabled Mobile Communications
Pub Date : 2024-12-23 DOI: 10.1109/TMLCN.2024.3521876
Danish Rizvi;David Boyle
Unmanned Aerial Vehicles (UAVs) are increasingly used as aerial base stations to provide ad hoc communications infrastructure. Building upon prior research efforts that consider either static nodes, 2D trajectories, or single-UAV systems, this paper focuses on the use of multiple UAVs to provide wireless communication to mobile users in the absence of terrestrial communications infrastructure. In particular, we jointly optimize UAV 3D trajectories and NOMA power allocation to maximize system throughput. First, a weighted K-means-based clustering algorithm establishes UAV-user associations at regular intervals. Then, the efficacy of training a novel Shared Deep Q-Network (SDQN) with action masking is explored. Unlike training each UAV separately using DQN, the SDQN reduces training time by using the experiences of multiple UAVs instead of a single agent. We also show that an SDQN can be used to train a multi-agent system with differing action spaces. Simulation results confirm that: 1) training a shared DQN outperforms a conventional DQN in terms of maximum system throughput (+20%) and training time (-10%); 2) it can converge for agents with different action spaces, yielding a 9% increase in throughput compared to the Mutual DQN algorithm; and 3) combining NOMA with an SDQN architecture enables the network to achieve a better sum rate than existing baseline schemes.
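As a rough illustration of how action masking lets a single shared Q-network serve agents with differing action spaces, the sketch below assigns minus infinity to the Q-values of disallowed actions before the greedy choice. The function name and the toy UAV masks are hypothetical; the paper's full SDQN training loop is not reproduced.

```python
import numpy as np

def masked_greedy_action(q_values, action_mask):
    """Select the best action among those allowed by the mask.
    Invalid actions (mask == 0) are set to -inf so they are never chosen,
    which is the essence of action masking for agents with differing action spaces."""
    masked_q = np.where(action_mask.astype(bool), q_values, -np.inf)
    return int(np.argmax(masked_q))

# Toy usage: two UAV agents share one Q-network output but have different action spaces.
q = np.array([0.2, 1.5, -0.3, 0.9, 0.1])
mask_uav_a = np.array([1, 1, 1, 1, 1])   # full action set
mask_uav_b = np.array([1, 1, 0, 0, 1])   # restricted action set
print(masked_greedy_action(q, mask_uav_a))  # -> 1
print(masked_greedy_action(q, mask_uav_b))  # -> 1
```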
Citations: 0
Online Learning for Intelligent Thermal Management of Interference-Coupled and Passively Cooled Base Stations
Pub Date : 2024-12-16 DOI: 10.1109/TMLCN.2024.3517619
Zhanwei Yu;Yi Zhao;Xiaoli Chu;Di Yuan
Passively cooled base stations (PCBSs) have emerged to deliver better cost and energy efficiency. However, passive cooling necessitates intelligent thermal control via traffic management, i.e., the instantaneous data traffic or throughput of a PCBS directly impacts its thermal performance. This is particularly challenging for outdoor deployment of PCBSs because the heat dissipation efficiency is uncertain and fluctuates over time. Moreover, the PCBSs are interference-coupled in multi-cell scenarios. Thus, a higher-throughput PCBS causes higher interference to the other PCBSs, which, in turn, requires more resource consumption to meet their respective throughput targets. In this paper, we address online decision-making for maximizing the total downlink throughput of a multi-PCBS system subject to constraints on operating temperature. We demonstrate that a reinforcement learning (RL) approach, specifically soft actor-critic (SAC), can successfully maximize throughput while keeping the PCBSs cool, by adapting the throughput to time-varying heat dissipation conditions. Furthermore, we design a denial and reward mechanism that effectively mitigates the risk of overheating during the exploration phase of RL. Simulation results show that our approach achieves up to 88.6% of the global optimum. This is very promising, as our approach operates without prior knowledge of future heat dissipation efficiency, which the global optimum requires.
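A hedged sketch of how a denial-style reward could couple delivered throughput with the operating-temperature constraint is shown below. The temperature limit, the penalty value, and the function name are assumptions for illustration rather than the paper's exact denial and reward mechanism.

```python
def thermal_reward(throughput, temperature, temp_limit=85.0, penalty=50.0):
    """Reward-shaping sketch: the agent earns the downlink throughput it delivers,
    but an action that pushes the PCBS above its operating-temperature limit is
    'denied' and penalized, discouraging overheating during exploration."""
    if temperature > temp_limit:
        return -penalty            # denial: overheating attempts are strongly punished
    return float(throughput)

# Toy usage: a hot cell yields a negative reward even at high throughput.
print(thermal_reward(throughput=120.0, temperature=70.0))   # 120.0
print(thermal_reward(throughput=150.0, temperature=90.0))   # -50.0
```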
Citations: 0
Robust and Lightweight Modeling of IoT Network Behaviors From Raw Traffic Packets
Pub Date : 2024-12-16 DOI: 10.1109/TMLCN.2024.3517613
Aleksandar Pasquini;Rajesh Vasa;Irini Logothetis;Hassan Habibi Gharakheili;Alexander Chambers;Minh Tran
Machine Learning (ML)-based techniques are increasingly used for network management tasks, such as intrusion detection, application identification, or asset management. Recent studies show that neural network-based traffic analysis can achieve performance comparable to human feature-engineered ML pipelines. However, neural networks deliver this performance at higher computational cost and complexity, since high-throughput traffic conditions necessitate specialized hardware for real-time operation. This paper presents lightweight models for encoding characteristics of Internet-of-Things (IoT) network packets: 1) we present two strategies to encode packets (regardless of their size, encryption, and protocol) into integer vectors: a shallow lightweight neural network and compression. With a public dataset containing about 8 million packets emitted by 22 IoT device types, we show the encoded packets can form complete (up to 80%) and homogeneous (up to 89%) clusters; 2) we demonstrate the efficacy of our generated encodings in a downstream classification task and quantify their computing costs. We train three multi-class models to predict the IoT class of given network packets and show our models can achieve the same level of accuracy (94%) as deep neural network embeddings but with computing costs up to 10 times lower; 3) we examine how the amount of packet data (headers and payload) affects prediction quality. We demonstrate how the choice of Internet Protocol (IP) payloads strikes a balance between prediction accuracy (99%) and cost. Together with the models' cost efficiency, this capability enables rapid and accurate predictions that meet the requirements of network operators.
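The compression-based encoding strategy can be illustrated with a short Python sketch that maps a raw packet (headers plus payload) to a fixed-length integer vector. The use of zlib, the 64-element vector length, and the toy packet bytes are assumptions for illustration, not the exact pipeline evaluated in the paper.

```python
import zlib
import numpy as np

def packet_to_int_vector(packet_bytes, length=64):
    """Encode a raw packet (headers + payload, any protocol or encryption) into a
    fixed-length integer vector by compressing it and padding/truncating the result.
    This mirrors the 'compression' encoding strategy only at a high level."""
    compressed = zlib.compress(packet_bytes)
    vec = np.frombuffer(compressed, dtype=np.uint8)[:length]
    return np.pad(vec, (0, max(0, length - vec.size)), constant_values=0)

# Toy usage: a fake packet becomes a 64-dimensional integer feature vector.
fake_packet = bytes.fromhex("450000548ac340004001") + b"payload-bytes" * 4
print(packet_to_int_vector(fake_packet).shape)   # (64,)
```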
Citations: 0
Self-Supervised Contrastive Learning for Joint Active and Passive Beamforming in RIS-Assisted MU-MIMO Systems
Pub Date : 2024-12-11 DOI: 10.1109/TMLCN.2024.3515913
Zhizhou He;Fabien Héliot;Yi Ma
Reconfigurable Intelligent Surfaces (RIS) can enhance system performance at the cost of increased complexity in multi-user MIMO systems. The beamforming options scale with the number of antennas at the base station/RIS. Existing methods for solving this problem tend to use computationally intensive iterative methods that are not scalable to large RIS-aided MIMO systems. We propose here a novel self-supervised contrastive learning neural network (NN) architecture to optimize the sum spectral efficiency through joint active and passive beamforming design in multi-user RIS-aided MIMO systems. Our scheme utilizes contrastive learning to capture the channel features from augmented channel data and can then be trained to perform beamforming with only 1% of labeled data. The labels are derived through a closed-form optimization algorithm, leveraging a sequential fractional programming approach. Leveraging the proposed self-supervised design greatly reduces the computational complexity during the training phase. Moreover, our proposed model can operate under various noise levels by using data augmentation methods while maintaining robust out-of-distribution performance under various propagation environments and different signal-to-noise ratios (SNRs). During training, our proposed network needs only 10% of the labeled data to converge compared to supervised learning. Our trained NN can then achieve performance that is only about 7% and 2.5% away from the mathematical upper bound and fully supervised learning, respectively, with far less computational complexity.
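A generic sketch of contrastive pre-training on augmented channel data is given below using the standard NT-Xent loss over two noisy “views” of the same channel embeddings. The loss choice, temperature value, and toy data are assumptions; the paper's specific augmentations and network architecture are not reproduced here.

```python
import numpy as np

def ntxent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss on two augmented 'views' of the same channel
    realizations: matching rows of z1 and z2 are positives, all other pairs are
    negatives. Generic sketch of the self-supervised pre-training objective."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)               # 2N embeddings
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    n = z1.shape[0]
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])  # index of each positive
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.sum(np.exp(sim), axis=1))
    return -np.mean(log_prob)

# Toy usage: two noisy views of the same channel embeddings form the positive pairs.
rng = np.random.default_rng(0)
h = rng.standard_normal((8, 16))
print(ntxent_loss(h + 0.01 * rng.standard_normal(h.shape), h))
```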
Citations: 0
IEEE Communications Society Board of Governors
Pub Date : 2024-12-11 DOI: 10.1109/TMLCN.2024.3500756
{"title":"IEEE Communications Society Board of Governors","authors":"","doi":"10.1109/TMLCN.2024.3500756","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3500756","url":null,"abstract":"","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"C3-C3"},"PeriodicalIF":0.0,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10792973","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Receiver Architectures for Robust MIMO Rate Splitting Multiple Access
Pub Date : 2024-12-09 DOI: 10.1109/TMLCN.2024.3513267
Dheeraj Raja Kumar;Carles Antón-Haro;Xavier Mestre
Machine Learning tools are becoming very powerful alternatives for improving the robustness of wireless communication systems. Signal processing procedures that tend to collapse in the presence of model mismatches can be effectively improved and made robust by incorporating the selective use of data-driven techniques. This paper explores the use of neural network (NN)-based receivers to improve the reception of a Rate Splitting Multiple Access (RSMA) system. The intention is to explore several alternatives to conventional successive interference cancellation (SIC) techniques, which are known to be ineffective in the presence of channel state information (CSI) and model errors. The focus is on NN-based architectures that do not need to be retrained at each channel realization. The main idea is to replace some of the basic operations in a conventional multi-antenna SIC receiver with their NN-based equivalents, following a hybrid model/data-driven approach that preserves the main procedures of the model-based signal demodulation chain. Three different architectures are explored along with their performance and computational complexity, characterized under different degrees of model uncertainty, including imperfect channel state information and non-linear channels. We evaluate the performance of the data-driven architectures in an overloaded scenario to analyze their effectiveness against conventional benchmarks. The study indicates that a higher degree of transceiver robustness can be achieved, provided the neural architecture is well designed and fed with the right information.
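The hybrid model/data-driven idea can be sketched as a conventional SIC loop in which the per-stream detection stage is a pluggable module that a neural demapper could replace. The matched-filter QPSK detector and all toy parameters below are placeholders, not the receiver architectures studied in the paper.

```python
import numpy as np

def sic_receive(y, H, detect, n_streams):
    """One pass of successive interference cancellation: detect the strongest
    stream, subtract its reconstructed contribution, and repeat. `detect` is a
    pluggable per-stream detector; in a hybrid model/data-driven receiver it can
    be replaced by a neural demapper while the cancellation loop stays model-based."""
    y_res = y.copy()
    symbols = np.zeros(n_streams, dtype=complex)
    order = np.argsort(-np.linalg.norm(H, axis=0))     # strongest column first
    for k in order:
        s_hat = detect(y_res, H[:, k])                 # model-based or NN-based stage
        symbols[k] = s_hat
        y_res = y_res - H[:, k] * s_hat                # cancel the detected stream
    return symbols

# Toy usage: a matched filter plus QPSK slicer stands in for the learned detector.
def mf_qpsk_detect(y, h):
    z = np.vdot(h, y) / np.vdot(h, h)
    return (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)

rng = np.random.default_rng(1)
H = (rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))) / np.sqrt(2)
s = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)
y = H @ s + 0.01 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
print(sic_receive(y, H, mf_qpsk_detect, n_streams=2))
```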
Citations: 0
Toward Understanding Federated Learning over Unreliable Networks
Pub Date : 2024-12-04 DOI: 10.1109/TMLCN.2024.3511475
Chenyuan Feng;Ahmed Arafa;Zihan Chen;Mingxiong Zhao;Tony Q. S. Quek;Howard H. Yang
This paper studies the efficiency of training a statistical model among an edge server and multiple clients via Federated Learning (FL) – a machine learning method that preserves data privacy during training – over wireless networks. Due to unreliable wireless channels and constrained communication resources, the server can only choose a handful of clients for parameter updates during each communication round. To address this issue, analytical expressions are derived to characterize the FL convergence rate, accounting for key features from both the communication and algorithmic aspects, including transmission reliability, scheduling policies, and the momentum method. First, the analysis reveals that either delicately designed user scheduling policies or expanding bandwidth to accommodate more clients in each communication round can expedite model training in networks with reliable connections. However, these methods become ineffective when the connection is erratic. Second, it is verified that incorporating the momentum method into the model training algorithm accelerates the rate of convergence and provides greater resilience against transmission failures. Lastly, extensive empirical simulations are provided to verify these theoretical findings and performance enhancements.
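A minimal sketch of one aggregation round that accounts for unreliable uplinks and applies server-side momentum appears below. The aggregation rule, momentum coefficient, and variable names are illustrative assumptions rather than the paper's exact update and analysis.

```python
import numpy as np

def fl_round(global_w, client_updates, delivered, momentum_buf, lr=1.0, beta=0.9):
    """One FL aggregation round over an unreliable uplink: only updates whose
    transmission succeeded (delivered[i] == True) are averaged, and a server-side
    momentum buffer smooths the step, illustrating why momentum can add resilience
    to transmission failures."""
    received = [u for u, ok in zip(client_updates, delivered) if ok]
    if not received:                      # every transmission failed this round
        return global_w, momentum_buf
    avg_update = np.mean(received, axis=0)
    momentum_buf = beta * momentum_buf + avg_update
    return global_w - lr * momentum_buf, momentum_buf

# Toy usage: 4 clients, 2 of which fail to deliver their update this round.
rng = np.random.default_rng(2)
w = np.zeros(5)
buf = np.zeros(5)
updates = [rng.standard_normal(5) * 0.1 for _ in range(4)]
w, buf = fl_round(w, updates, delivered=[True, False, True, False], momentum_buf=buf)
print(w)
```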
Citations: 0
A New Heterogeneous Hybrid Massive MIMO Receiver With an Intrinsic Ability of Removing Phase Ambiguity of DOA Estimation via Machine Learning
Pub Date : 2024-11-26 DOI: 10.1109/TMLCN.2024.3506874
Feng Shu;Baihua Shi;Yiwen Chen;Jiatong Bai;Yifan Li;Tingting Liu;Zhu Han;Xiaohu You
Massive multiple-input multiple-output (MIMO) antenna arrays incur substantial circuit cost and computational complexity. To satisfy the need for high precision and low cost in future green wireless communication, the conventional hybrid analog and digital MIMO receive structure emerges as a natural choice. However, it suffers from phase ambiguity in direction of arrival (DOA) estimation and requires at least two time-slots to complete a single DOA measurement, with the first time-slot generating the set of candidate solutions and the second finding the true direction by receive beamforming over this set, which leads to low time-efficiency. To address this problem, a new heterogeneous sub-connected hybrid analog and digital ($\mathrm{H}^{2}$AD) MIMO structure is proposed with an intrinsic ability to remove phase ambiguity, and a corresponding new framework is developed to implement rapid, high-precision DOA estimation using only a single time-slot. The proposed framework consists of two steps: 1) form a set of candidate solutions using existing methods like MUSIC; 2) find the class of the true solutions and compute the class mean. To infer the set of true solutions, we propose two new clustering methods: weight global minimum distance (WGMD) and weight local minimum distance (WLMD). Next, we also enhance two classic clustering methods: accelerating local weighted k-means (ALW-K-means) and improved density-based clustering (DBSCAN). Additionally, the corresponding closed-form expression of the Cramer-Rao lower bound (CRLB) is derived. Simulation results show that the proposed frameworks using the above four clustering methods can approach the CRLB in almost all signal-to-noise ratio (SNR) regions except for extremely low SNR (SNR $< -5$ dB). The four clustering methods, in decreasing order of accuracy, are: WGMD, improved DBSCAN, ALW-K-means, and WLMD.
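One plausible reading of the global-minimum-distance clustering step, where the true DOA is the candidate that aligns across the subarrays' candidate sets, is sketched below. The distance metric, the weighting, and the toy angles are assumptions; the exact WGMD formulation is defined in the paper.

```python
import numpy as np

def pick_true_doa(candidate_sets, weights=None):
    """Hedged sketch: the true DOA appears (approximately) in every subarray's
    candidate set, while ambiguous solutions do not align across sets. For each
    candidate in the first set, sum its (weighted) minimum distance to every other
    set, keep the candidate with the smallest total, and return the mean of the
    matched candidates as the class mean."""
    ref = np.asarray(candidate_sets[0])
    others = [np.asarray(s) for s in candidate_sets[1:]]
    weights = np.ones(len(others)) if weights is None else np.asarray(weights)
    best_cost, best_cluster = np.inf, None
    for c in ref:
        matches = [s[np.argmin(np.abs(s - c))] for s in others]  # nearest candidate per set
        cost = np.sum(weights * np.abs(np.array(matches) - c))
        if cost < best_cost:
            best_cost, best_cluster = cost, [c] + matches
    return float(np.mean(best_cluster))

# Toy usage: three subarrays, true DOA near 30 degrees, plus spurious ambiguous angles.
sets = [[30.1, -50.0, 75.0], [29.8, 10.0, -65.0], [30.3, 55.0, -20.0]]
print(pick_true_doa(sets))   # close to 30
```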
Citations: 0
Semi-Supervised Learning via Cross-Prediction-Powered Inference for Wireless Systems
Pub Date : 2024-11-20 DOI: 10.1109/TMLCN.2024.3503543
Houssem Sifaou;Osvaldo Simeone
In many wireless application scenarios, acquiring labeled data can be prohibitively costly, requiring complex optimization processes or measurement campaigns. Semi-supervised learning leverages unlabeled samples to augment the available dataset by assigning synthetic labels obtained via machine learning (ML)-based predictions. However, treating the synthetic labels as true labels may yield worse-performing models than models trained using only labeled data. Inspired by the recently developed prediction-powered inference (PPI) framework, this work investigates how to leverage the synthetic labels produced by an ML model while accounting for their inherent bias with respect to the true labels. To this end, we first review PPI and its recent extensions, namely tuned PPI and cross-prediction-powered inference (CPPI). Then, we introduce two novel variants of PPI. The first, referred to as tuned CPPI, provides CPPI with an additional degree of freedom in adapting to the quality of the ML-based labels. The second, meta-CPPI (MCPPI), extends tuned CPPI via the joint optimization of the ML labeling models and of the parameters of interest. Finally, we showcase two applications of PPI-based techniques in wireless systems, namely beam alignment based on channel knowledge maps in millimeter-wave systems and indoor localization based on received signal strength information. Simulation results show the advantages of PPI-based techniques over conventional approaches that rely solely on labeled data or that apply standard pseudo-labeling strategies from semi-supervised learning. Furthermore, the proposed tuned CPPI method is observed to achieve the best performance among all benchmark schemes, especially in the regime of limited labeled data.
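For reference, a minimal sketch of a power-tuned PPI mean estimator, the generic form that tuned CPPI builds on, is given below. The function name and toy data are illustrative, and the paper's tuned CPPI and MCPPI variants extend well beyond this simple estimator.

```python
import numpy as np

def tuned_ppi_mean(y_labeled, f_labeled, f_unlabeled, lam):
    """Prediction-powered estimate of a mean: the average of the model's predictions
    on unlabeled data, corrected by the labeled-set bias between true labels and
    predictions, with a tuning weight `lam` in [0, 1] controlling how much the
    synthetic labels are trusted (lam = 0 recovers the labeled-only estimator)."""
    rectifier = np.mean(y_labeled) - lam * np.mean(f_labeled)
    return lam * np.mean(f_unlabeled) + rectifier

# Toy usage: a biased predictor (+0.5 offset) of a quantity whose true mean is 2.0.
rng = np.random.default_rng(3)
y_lab = 2.0 + 0.1 * rng.standard_normal(50)
f_lab = y_lab + 0.5
f_unlab = 2.5 + 0.1 * rng.standard_normal(5000)
print(tuned_ppi_mean(y_lab, f_lab, f_unlab, lam=1.0))   # close to 2.0
```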
Citations: 0