
Latest Publications in IEEE Transactions on Machine Learning in Communications and Networking

Sample-Efficient Multi-Agent DQNs for Scalable Multi-Domain 5G+ Inter-Slice Orchestration
Pub Date: 2024-06-28 DOI: 10.1109/TMLCN.2024.3420268
Pavlos Doanis;Thrasyvoulos Spyropoulos
Data-driven network slicing has recently been explored as a major driver for beyond-5G networks. Nevertheless, we are still a long way from such solutions being practically applicable to real problems. Most solutions addressing the problem of dynamically placing virtual network function chains (“slices”) on top of a physical topology still face one or more of the following hurdles: (i) they focus on simple slicing setups (e.g., single domain, single slice, simple VNF chains and performance metrics); (ii) solutions based on modern reinforcement learning theory have to deal with astronomically large action spaces when considering multi-VNF, multi-domain, multi-slice problems; (iii) the training of the algorithms is not particularly data-efficient, which can hinder their practical application given the scarce(r) availability of cellular-network-related data (as opposed to standard machine learning problems). To this end, we attempt to tackle all the above shortcomings in one common framework. For (i), we propose a generic, queuing-network-based model that captures the inter-slice orchestration setting, supporting complex VNF chain topologies and end-to-end performance metrics. For (ii), we explore multi-agent DQN algorithms that can reduce action space complexity by orders of magnitude compared to standard DQN. For (iii), we investigate two mechanisms for storing to and selecting from the experience replay buffer, in order to speed up the training of DQN agents. The proposed scheme was validated to outperform both vanilla DQN (orders-of-magnitude faster convergence) and static heuristics (a $3\times$ cost improvement).
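The action-space factorization at the heart of the multi-agent approach can be illustrated with a short sketch. The snippet below is a minimal illustration rather than the authors' implementation; the problem sizes, network widths, and epsilon-greedy policy are all assumed. It shows how assigning one DQN per domain replaces a joint space of |A|^N actions with N separate heads of |A| outputs each.

```python
import torch
import torch.nn as nn

N_DOMAINS, N_ACTIONS, STATE_DIM = 4, 8, 16   # hypothetical problem sizes

class AgentDQN(nn.Module):
    """One Q-network per domain: N * |A| outputs instead of |A|**N."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_ACTIONS))
    def forward(self, s):
        return self.net(s)

agents = [AgentDQN() for _ in range(N_DOMAINS)]

def joint_action(state, eps=0.1):
    """Epsilon-greedy per agent; the joint action is the tuple of local picks."""
    acts = []
    for q in agents:
        if torch.rand(()) < eps:
            acts.append(int(torch.randint(N_ACTIONS, (1,)).item()))
        else:
            acts.append(int(q(state).argmax().item()))
    return acts

state = torch.randn(STATE_DIM)
print(joint_action(state))   # e.g. [3, 0, 7, 2]: one local decision per domain
```

With the assumed sizes, a joint DQN would have to rank 8**4 = 4096 actions per state, while the factorized agents rank 4 sets of 8, which is the orders-of-magnitude reduction the abstract refers to.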
Citations: 0
Learning End-to-End Hybrid Precoding for Multi-User mmWave Mobile System With GNNs
Pub Date: 2024-06-28 DOI: 10.1109/TMLCN.2024.3420269
Ruiming Wang;Chenyang Yang;Shengqian Han;Jiajun Wu;Shuangfeng Han;Xiaoyun Wang
Hybrid precoding is an efficient technique for achieving high rates at a low cost in millimeter wave (mmWave) multi-antenna systems. Many research efforts have explored the use of deep learning to optimize hybrid precoding, particularly in static channel scenarios. However, in mobile communication systems, the performance of mmWave communication severely degrades due to the channel aging effect. Furthermore, the learned precoding policy should be adaptable to dynamic environments, such as variations in the number of active users, to avoid the need for re-training. In this paper, resorting to the proactive optimization approach, we propose an end-to-end learning method to learn the downlink multi-user analog and digital hybrid precoders directly from the received uplink sounding reference signals, without explicit channel estimation and prediction. We take into account the frame structure used in practical cellular systems and design a parallel proactive optimization network (P-PONet) to concurrently learn hybrid precoding for multiple downlink subframes. The P-PONet consists of several graph neural networks, which enable generalization across different system scales. Simulation results show that the proposed P-PONet outperforms existing methods in terms of sum-rate performance and sounding overhead, and is generalizable to various system configurations.
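The scale-generalization property attributed to the GNNs can be sketched with a single permutation-equivariant layer. The code below is an illustrative stand-in, not the P-PONet architecture; the feature sizes and the mean-aggregation update rule are assumptions.

```python
import torch
import torch.nn as nn

class UserGNNLayer(nn.Module):
    """Update each user node from its own features plus an aggregate over the
    other users; the same weights apply for any number of active users."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.self_fc = nn.Linear(d_in, d_out)   # transform own features
        self.aggr_fc = nn.Linear(d_in, d_out)   # transform neighbor aggregate
    def forward(self, x):                       # x: (n_users, d_in)
        n = x.shape[0]
        total = x.sum(dim=0, keepdim=True)
        others_mean = (total - x) / max(n - 1, 1)   # mean over the other users
        return torch.relu(self.self_fc(x) + self.aggr_fc(others_mean))

layer = UserGNNLayer(8, 8)
print(layer(torch.randn(3, 8)).shape)   # works for 3 users ...
print(layer(torch.randn(7, 8)).shape)   # ... or 7, with identical weights
```

Because the aggregation is symmetric in the users, the layer is permutation-equivariant, which is what lets a trained network be reused when the number of active users changes.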
Citations: 0
Learning Energy-Efficient Transmitter Configurations for Massive MIMO Beamforming
Pub Date: 2024-06-27 DOI: 10.1109/TMLCN.2024.3419728
Hamed Hojatian;Zoubeir Mlika;Jérémy Nadal;Jean-François Frigon;François Leduc-Primeau
Hybrid beamforming (HBF) and antenna selection are promising techniques for improving the energy efficiency (EE) of massive multiple-input multiple-output (mMIMO) systems. However, the transmitter architecture may contain several parameters that need to be optimized, such as the power allocated to the antennas and the connections between the antennas and the radio frequency chains. Therefore, finding the optimal transmitter architecture requires solving a non-convex mixed integer problem in a large search space. In this paper, we consider the problem of maximizing the EE of fully digital precoder (FDP) and HBF transmitters. First, we propose an energy model for different beamforming structures. Then, based on the proposed energy model, we develop a self-supervised learning (SSL) method to maximize the EE by designing the transmitter configuration for FDP and HBF. The proposed deep neural networks can provide different trade-offs between spectral efficiency and energy consumption while adapting to different numbers of active users. Finally, towards obtaining a system that can be trained using in-the-field measurements, we investigate the ability of the model to be trained exclusively using imperfect channel state information (CSI), both for the input to the deep learning model and for the calculation of the loss function. Simulation results show that the proposed solutions can outperform conventional methods in terms of EE while being trained with imperfect CSI. Furthermore, we show that the proposed solutions are less complex and more robust to noise than conventional methods.
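The self-supervised training idea, optimizing the transmitter configuration directly against an energy-efficiency objective computed from the same (possibly imperfect) CSI rather than against labeled optimal solutions, can be sketched as follows. The rate and power models are toy stand-ins, and P_FIXED is an assumed static circuit-power term.

```python
import torch
import torch.nn as nn

N_ANT, P_FIXED = 8, 1.0   # hypothetical antenna count and circuit power

# Network maps channel gains to per-antenna transmit powers (Softplus > 0).
net = nn.Sequential(nn.Linear(N_ANT, 32), nn.ReLU(),
                    nn.Linear(32, N_ANT), nn.Softplus())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def neg_ee(gains, p):
    rate = torch.log2(1.0 + (gains * p).sum(dim=-1))    # toy rate model
    return -(rate / (p.sum(dim=-1) + P_FIXED)).mean()   # -EE = -rate/power

for _ in range(200):                    # self-supervised loop: no labels
    csi = torch.rand(64, N_ANT)         # stand-in for (imperfect) measured CSI
    loss = neg_ee(csi, net(csi))
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final -EE: {loss.item():.3f}")
```

The key point the sketch captures is that the loss itself is the (negative) system objective evaluated on the available CSI, so training can in principle proceed from in-the-field measurements without ground-truth optimal configurations.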
Citations: 0
GenAI-Based Models for NGSO Satellites Interference Detection
Pub Date: 2024-06-25 DOI: 10.1109/TMLCN.2024.3418933
Almoatssimbillah Saifaldawla;Flor Ortiz;Eva Lagunas;Abuzar B. M. Adam;Symeon Chatzinotas
Recent advancements in satellite communications have highlighted the challenge of interference detection, especially with the new generation of non-geostationary orbit satellites (NGSOs) that share the same frequency bands as legacy geostationary orbit satellites (GSOs). Despite existing radio regulations during the filing stage, this heightened congestion in the spectrum is likely to lead to instances of interference during real-time operations. This paper addresses the NGSO-to-GSO interference problem by proposing advanced artificial intelligence (AI) models to detect interference events. In particular, we focus on the downlink interference case, where signals from low-Earth orbit satellites (LEOs) potentially impact the signals received at the GSO ground stations (GGSs). In addition to the widely used autoencoder-based models (AEs), we design, develop, and train two generative AI-based models (GenAI), which are a variational autoencoder (VAE) and a transformer-based interference detector (TrID). These models generate samples of the expected GSO signal, whose error with respect to the input signal is used to flag interference. Actual satellite positions, trajectories, and realistic system parameters are used to emulate the interference scenarios and validate the proposed models. Numerical evaluation reveals that the models exhibit higher accuracy for detecting interference in the time-domain signal representations compared to the frequency-domain representations. Furthermore, the results demonstrate that TrID significantly outperforms the other models as well as the traditional energy detector (ED) approach, showing an increase of up to 31.23% in interference detection accuracy, offering an innovative and efficient solution to a pressing challenge in satellite communications.
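The mechanism shared by the AE, VAE, and TrID detectors, generating the expected GSO signal and flagging windows whose reconstruction error is too large, can be sketched with a plain autoencoder stand-in. The signal model, window length, and percentile threshold below are assumptions made for illustration.

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
T = 128                                         # samples per window (assumed)
clean = torch.randn(512, T)                     # stand-in clean GSO windows

# Bottlenecked autoencoder trained only on interference-free windows.
ae = nn.Sequential(nn.Linear(T, 32), nn.ReLU(), nn.Linear(32, T))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(300):
    loss = ((ae(clean) - clean) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    err = ((ae(clean) - clean) ** 2).mean(dim=1)
    thresh = np.percentile(err.numpy(), 99)     # false-alarm budget (assumed)
    test = clean[:4].clone()
    test[0] += 0.8 * torch.randn(T)             # inject NGSO-like interference
    errs = ((ae(test) - test) ** 2).mean(dim=1)
print((errs > thresh).tolist())                 # corrupted window stands out
```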
Citations: 0
Incremental Adversarial Learning for Polymorphic Attack Detection
Pub Date: 2024-06-24 DOI: 10.1109/TMLCN.2024.3418756
Ulya Sabeel;Shahram Shah Heydari;Khalil El-Khatib;Khalid Elgazzar
AI-based Network Intrusion Detection Systems (NIDS) provide effective mechanisms for cybersecurity analysts to gain insights and thwart several network attacks. Although current IDS can identify known/typical attacks with high accuracy, recent research shows that such systems perform poorly when facing atypical and dynamically changing (polymorphic) attacks. In this paper, we focus on improving the detection capability of the IDS for atypical and polymorphic network attacks. Our system generates adversarial polymorphic attacks against the IDS to examine its performance and incrementally retrains it to strengthen its detection of new attacks, specifically for minority attack samples in the input data. The employed attack quality analysis ensures that the adversarial atypical/polymorphic attacks generated through our system resemble original network attacks. We showcase the performance of the proposed IDS by training it on the CICIDS2017 and CICIoT2023 benchmark datasets and evaluating it against several atypical/polymorphic attack flows. The results indicate that the proposed technique, through adaptive training, learns the pattern of dynamically changing atypical/polymorphic attacks, identifies such attacks with approximately 90% balanced accuracy in most cases, and surpasses various state-of-the-art detection and class balancing techniques.
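The incremental retraining loop can be sketched as follows, with FGSM used as a generic stand-in for the paper's polymorphic attack generator; the feature dimension, class balance, and perturbation budget are assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
ids = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(ids.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fgsm(x, y, eps=0.1):
    """Perturb attack features in the direction that raises the IDS loss."""
    x = x.clone().requires_grad_(True)
    loss_fn(ids(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

x_benign = torch.randn(64, 20) + 2.0                 # stand-in benign flows
x_attack = torch.randn(16, 20)                       # minority attack class
y_benign = torch.zeros(64, dtype=torch.long)
y_attack = torch.ones(16, dtype=torch.long)

for _ in range(5):                                   # incremental rounds
    x_adv = fgsm(x_attack, y_attack)                 # "polymorphic" variants
    x_tr = torch.cat([x_benign, x_attack, x_adv])    # augment minority class
    y_tr = torch.cat([y_benign, y_attack, y_attack])
    for _ in range(100):                             # retrain on augmented set
        loss = loss_fn(ids(x_tr), y_tr)
        opt.zero_grad(); loss.backward(); opt.step()

acc = (ids(x_tr).argmax(1) == y_tr).float().mean()
print(f"accuracy on augmented set after retraining: {acc:.2f}")
```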
Citations: 0
A Machine Learning Aided Reference-Tone-Based Phase Noise Correction Framework for Fiber-Wireless Systems
Pub Date: 2024-06-24 DOI: 10.1109/TMLCN.2024.3418748
Guo Hao Thng;Said Mikki
In recent years, research involving the use of machine learning in communication networks has shown promising results, in particular in improving receiver sensitivity against noise and link impairments. The proposal of analog radio-over-fiber fronthaul solutions simplifies the overall base station configuration by generating wireless signals at the desired transmission frequency directly after photodiode heterodyne detection, without requiring additional frequency upconversion components. However, analog radio-over-fiber signals are more susceptible to nonlinear distortions originating from the optical transmission system. This paper explores the use of machine learning in an analog radio-over-fiber link, improving receiver sensitivity in the presence of phase noise. The machine learning algorithm is implemented at the receiver. To evaluate the feasibility of the proposed machine-learning-based phase noise correction approach, software simulations were conducted to collect the data needed for training the machine learning algorithm. Initial findings suggest that the proposed machine-learning-based receiver can perform close to conventional heterodyne-based receivers in terms of detection accuracy, exhibiting great tolerance against phase-induced noise, with a symbol error rate improvement from $10^{-2}$ to $10^{-5}$, using a relatively simple machine learning algorithm with only 3 hidden layers of fully connected feedforward neural networks.
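A network of the shape the abstract describes, three fully connected hidden layers classifying phase-noise-distorted symbols, can be sketched as below. The QPSK constellation, phase-noise model, and layer widths are assumptions, and the reference-tone part of the framework is omitted.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# QPSK symbols (I, Q), unit energy.
sym = torch.tensor([[1., 1.], [1., -1.], [-1., 1.], [-1., -1.]]) / 2 ** 0.5
labels = torch.randint(4, (4096,))
phase = 0.2 * torch.randn(4096)             # crude phase-noise stand-in
c, s = torch.cos(phase), torch.sin(phase)
x = sym[labels]
rx = torch.stack([c * x[:, 0] - s * x[:, 1],    # rotate I/Q by the phase noise
                  s * x[:, 0] + c * x[:, 1]], dim=1)
rx += 0.05 * torch.randn_like(rx)               # AWGN

net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(),    # 3 hidden layers, as in
                    nn.Linear(32, 32), nn.ReLU(),   # the abstract
                    nn.Linear(32, 32), nn.ReLU(),
                    nn.Linear(32, 4))               # one logit per symbol
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(500):
    loss = loss_fn(net(rx), labels)
    opt.zero_grad(); loss.backward(); opt.step()

ser = (net(rx).argmax(1) != labels).float().mean()
print(f"training SER: {ser:.4f}")
```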
Citations: 0
Physical Layer Spoof Detection and Authentication for IoT Devices Using Deep Learning Methods
Pub Date: 2024-06-21 DOI: 10.1109/TMLCN.2024.3417806
Da Huang;Akram Al-Hourani
The proliferation of the Internet of Things (IoT) has created significant opportunities for future telecommunications. A popular category of IoT devices is oriented toward low-cost and low-power applications. However, certain aspects of this category, including the authentication process, remain inadequately investigated against cyber vulnerabilities. This is caused by the inherent trade-off between device complexity and security rigor. In this work, we propose an authentication method based on radio frequency fingerprinting (RFF) using deep learning. This method can be implemented on the base station side without increasing the complexity of the IoT devices. Specifically, we propose four representation modalities based on the continuous wavelet transform (CWT) to exploit tempo-spectral radio fingerprints. Accordingly, we utilize the generative adversarial network (GAN) and convolutional neural network (CNN) for spoof detection and authentication. For empirical validation, we consider the widely popular LoRa system with a focus on the preamble of the radio frame. The presented experimental test involves 20 off-the-shelf LoRa modules to demonstrate the feasibility of the proposed approach, showing reliable detection of spoofing devices and a high authentication accuracy of 92.4%.
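Building one CWT-based representation and scoring it with a small CNN can be sketched as follows. PyWavelets is assumed to be available; the synthetic chirp stands in for a LoRa preamble, and the CNN is illustrative rather than the paper's architecture.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn

fs, n = 125e3, 1024
t = np.arange(n) / fs
# Synthetic up-chirp as a stand-in for a captured LoRa preamble.
preamble = np.cos(2 * np.pi * (1e3 * t + 3e6 * t ** 2))

scales = np.arange(1, 65)
coefs, _ = pywt.cwt(preamble, scales, "morl")        # (64, 1024) scalogram
img = torch.tensor(np.abs(coefs), dtype=torch.float32)[None, None]

cnn = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(8, 20))                # 20 device classes, as in
logits = cnn(img)                                    # the 20-module testbed
print(logits.shape)                                  # torch.Size([1, 20])
```

The scalogram retains both the timing of the chirp and its spectral content, which is the tempo-spectral fingerprint the abstract refers to; hardware imperfections of individual transmitters perturb this image in device-specific ways.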
Citations: 0
Game Strategies for Data Transfer Infrastructures Against ML-Profile Exploits
Pub Date: 2024-06-21 DOI: 10.1109/TMLCN.2024.3417889
Nageswara S. V. Rao;Chris Y. T. Ma;Fei He
Data transfer infrastructures composed of Data Transfer Nodes (DTN) are critical to meeting the distributed computing and storage demands of clouds, data repositories, and complexes of supercomputers and instruments. The infrastructure’s throughput profile, estimated as a function of the connection round trip time using Machine Learning (ML) methods, is an indicator of its operational state, and has been utilized for monitoring, diagnosis, and optimization purposes. We show that the inherent statistical variations and precision of throughput profiles estimated by ML methods can be exploited for unauthorized use of DTNs’ computing and network capacity. We present a game theoretic formulation that captures the cost-benefit trade-offs between an attacker that attempts to hide under the profile’s statistical variations and a provider that attempts to balance compromise detection with the cost of throughput measurements. The Nash equilibrium conditions adapted to this game provide qualitative insights and bounds for the success probabilities of the attacker and provider, by utilizing the generalization equation of the ML estimate. We present experimental results that illustrate this game, wherein an attacker that exploits the ML-estimate properties compromises a significant portion of DTN computing capacity without being detected.
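The cost-benefit structure of such a game can be illustrated with a toy two-by-two payoff matrix. The detection probabilities and cost figures below are invented for the example; the paper derives them from the ML profile's statistical variation and the measurement cost.

```python
import numpy as np

# Rows: provider measurement effort (low, high).
# Cols: attacker load (hide under profile variation, greedy).
p_detect = np.array([[0.05, 0.40],     # low measurement effort
                     [0.20, 0.90]])    # high measurement effort
meas_cost, damage, reward = 1.0, 5.0, 3.0   # illustrative payoff parameters

# Provider pays for measurements and for undetected compromise.
provider_payoff = -meas_cost * np.array([[0.5], [1.5]]) - damage * (1 - p_detect)
# Attacker gains from undetected use, loses when caught.
attacker_payoff = reward * (1 - p_detect) - 2.0 * p_detect

# Pure-strategy best responses: each side answers the other's fixed choice.
for a_col in range(2):
    print("attacker plays", a_col,
          "-> provider best response:", provider_payoff[:, a_col].argmax())
for p_row in range(2):
    print("provider plays", p_row,
          "-> attacker best response:", attacker_payoff[p_row].argmax())
```

With these invented numbers, (low effort, hide) is a mutual best response, i.e. a pure Nash equilibrium in which the attacker operates under the profile's statistical variation, which mirrors the trade-off the abstract describes.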
Citations: 0
Reservoir Computing-Based Digital Self-Interference Cancellation for In-Band Full-Duplex Radios
Pub Date: 2024-06-13 DOI: 10.1109/TMLCN.2024.3414296
Zhikai Liu;Haifeng Luo;Tharmalingam Ratnarajah
Digital self-interference cancellation (DSIC) has become a pivotal strategy for implementing in-band full-duplex (IBFD) radios, overcoming the hurdles posed by the residual self-interference that persists after propagation and analog domain cancellation. This work proposes a novel reservoir computing-based DSIC (RC-DSIC) technique and compares it with traditional polynomial-based (PL-DSIC) and various existing neural network-based (NN-DSIC) approaches. We begin by delineating the structure of the RC and exploring its capability to address the DSIC task, highlighting its potential advantages over current methodologies. Subsequently, we examine the computational complexity of these approaches and undertake extensive simulations to compare the proposed RC-DSIC approach against PL-DSIC and existing NN-DSIC schemes. Our results reveal that the RC-DSIC scheme attains 99.84% of the performance offered by PL-based DSIC algorithms while requiring only 1.51% of the computational demand. Compared to many existing NN-DSIC schemes, the RC-DSIC method achieves at least 99.73% of their performance with no more than 36.61% of the computational demand. This performance establishes RC-DSIC as an effective and efficient solution for DSIC in IBFD, making it the preferable implementation in terms of computational simplicity.
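The RC structure corresponds to the classic echo state network recipe: a fixed random reservoir provides the nonlinear expansion, and only a linear readout is trained. The sketch below is a generic ESN with a toy memoryless nonlinearity as the self-interference, not the paper's exact design; it shows why the trained part stays computationally cheap.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 2000
tx = rng.standard_normal(T)                       # known transmitted baseband
si = 0.8 * tx - 0.3 * tx ** 3 + 0.05 * rng.standard_normal(T)  # toy nonlinear SI

# Fixed, untrained reservoir: random input weights and a recurrent matrix
# rescaled to spectral radius < 1 for the echo state property.
W_in = 0.5 * rng.standard_normal((N, 1))
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):                                # reservoir state update
    x = np.tanh(W_in[:, 0] * tx[t] + W @ x)
    states[t] = x

# The only trained part: a ridge-regression readout that predicts the
# self-interference so it can be subtracted from the received signal.
lam = 1e-4
w_out = np.linalg.solve(states.T @ states + lam * np.eye(N), states.T @ si)
residual = si - states @ w_out
print("SI power reduction: %.1f dB"
      % (10 * np.log10(np.mean(si ** 2) / np.mean(residual ** 2))))
```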
Citations: 0
Deep Conditional Generative Adversarial Networks for Efficient Channel Estimation in AmBC Systems
Pub Date: 2024-06-12 DOI: 10.1109/TMLCN.2024.3413669
Shayan Zargari;Chintha Tellambura;Amine Maaref;Geoffrey Ye Li
In ambient backscatter communication (AmBC), battery-free devices (tags) harvest energy from ambient radio frequency (RF) signals and communicate with readers. Although reliable channel estimation (CE) is critical, classical pilot-based estimators tend to perform poorly. To address this challenge, we treat CE as a denoising problem using conditional generative adversarial networks (CGANs). A three-dimensional (3D) denoising block leverages the spatial and temporal characteristics of pilot signals, considering both the real and imaginary components of the channel matrices. The proposed CGAN estimator is extensively evaluated against traditional estimators such as minimum mean-squared error (MMSE), least squares (LS), convolutional neural network (CNN), CNN-based deep residual learning denoiser (CRLD), and blind estimation. Simulation results show an 82% gain of the proposed estimator over the CRLD and MMSE estimators at an SNR of 5 dB. Moreover, it has advanced learning capabilities and accurately replicates complex channel characteristics.
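The conditional-GAN denoising idea can be sketched in a 2D toy form; the paper's denoising block is 3D, and the shapes, loss weighting, and training schedule below are assumptions. The generator refines a noisy pilot-based estimate, and the discriminator judges (noisy, refined) pairs against (noisy, true) pairs.

```python
import torch
import torch.nn as nn

H, W = 8, 8                                       # toy channel-matrix size
# Two input channels carry the real and imaginary parts of the estimate.
G = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 2, 3, padding=1))
D = nn.Sequential(nn.Conv2d(4, 16, 3, stride=2), nn.ReLU(),
                  nn.Flatten(), nn.Linear(16 * 3 * 3, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    h_true = torch.randn(16, 2, H, W)                  # stand-in true channels
    h_noisy = h_true + 0.3 * torch.randn_like(h_true)  # LS-style noisy estimate

    # Discriminator: (condition, true) pairs vs. (condition, generated) pairs.
    fake = G(h_noisy).detach()
    d_loss = bce(D(torch.cat([h_noisy, h_true], 1)), torch.ones(16, 1)) + \
             bce(D(torch.cat([h_noisy, fake], 1)), torch.zeros(16, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool D, plus an L2 term pulling toward the true channel.
    fake = G(h_noisy)
    g_loss = bce(D(torch.cat([h_noisy, fake], 1)), torch.ones(16, 1)) \
             + 10.0 * ((fake - h_true) ** 2).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(f"d_loss={d_loss.item():.3f}, g_loss={g_loss.item():.3f}")
```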
Citations: 0