
Latest publications in IEEE Transactions on Machine Learning in Communications and Networking

A Link-Quality Anomaly Detection Framework for Software-Defined Wireless Mesh Networks
Pub Date: 2024-04-15 DOI: 10.1109/TMLCN.2024.3388973
Sotiris Skaperas;Lefteris Mamatas;Vassilis Tsaoussidis
Software-defined wireless mesh networks are being increasingly deployed in diverse settings, such as smart cities and public Wi-Fi access infrastructures. The signal propagation and interference issues that typically characterize these environments can be handled by employing SDN controller mechanisms that effectively monitor link quality and trigger appropriate mitigation strategies, such as adjusting link and/or routing protocols. In this paper, we propose an unsupervised machine learning (ML) online framework for link quality detection consisting of: 1) an improved preprocessing clustering algorithm, based on elastic similarity measures, to efficiently characterize wireless links in terms of reliability, and 2) a novel change point (CP) detector for the real-time identification of anomalies in the quality of selected links, which minimizes the overestimation error through the incorporation of a rank-based test and a recursive max-type procedure. In this sense, considering the communication constraints of such environments, our approach minimizes the detection overhead and the inaccurate decisions caused by overestimation. The proposed detector is validated, both on its individual components and as an overall mechanism, against both synthetic and real data traces, the latter extracted from real wireless mesh network deployments.
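The detector's core idea is a rank-based, max-type change-point statistic over a link-quality time series. Below is a minimal Python sketch of that idea, assuming a scalar quality series; the standardized Wilcoxon-style rank sum is a simplified stand-in for the paper's recursive max-type procedure, and the detection threshold is left unspecified.

```python
import numpy as np

def rank_cp_statistic(x):
    """Max-type rank statistic over all candidate split points.

    For each split k, compares the rank sum of the first k observations
    with its null mean/variance (Wilcoxon-style); a large maximum suggests
    a change point somewhere in the series.
    """
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1          # ranks 1..n
    stats = []
    for k in range(2, n - 1):
        mean_k = k * (n + 1) / 2                   # null mean of the rank sum
        var_k = k * (n - k) * (n + 1) / 12         # null variance
        stats.append(abs(ranks[:k].sum() - mean_k) / np.sqrt(var_k))
    return max(stats)

rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0.9, 0.05, 100),   # stable link quality
                         rng.normal(0.6, 0.05, 100)])  # degraded after t=100
print(rank_cp_statistic(series))   # large value -> flag a quality anomaly
```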
{"title":"A Link-Quality Anomaly Detection Framework for Software-Defined Wireless Mesh Networks","authors":"Sotiris Skaperas;Lefteris Mamatas;Vassilis Tsaoussidis","doi":"10.1109/TMLCN.2024.3388973","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3388973","url":null,"abstract":"Software-defined wireless mesh networks are being increasingly deployed in diverse settings, such as smart cities and public Wi-Fi access infrastructures. The signal propagation and interference issues that typically characterize these environments can be handled by employing SDN controller mechanisms, effectively monitoring link quality and triggering appropriate mitigation strategies, such as adjusting link and/or routing protocols. In this paper, we propose an unsupervised machine learning (ML) online framework for link quality detection consisting of: 1) improved preprocessing clustering algorithm, based on elastic similarity measures, to efficiently characterize wireless links in terms of reliability, and 2) a novel change point (CP) detector for the real-time identification of anomalies in the quality of selected links, which minimizes the overestimation error through the incorporation of a rank-based test and a recursive max-type procedure. In this sense, considering the communication constraints of such environments, our approach minimizes the detection overhead and the inaccurate decisions caused by overestimation. The proposed detector is validated, both on its individual components and as an overall mechanism, against synthetic but also real data traces; the latter being extracted from real wireless mesh network deployments.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"495-510"},"PeriodicalIF":0.0,"publicationDate":"2024-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10499246","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140639411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
STTMC: A Few-Shot Spatial Temporal Transductive Modulation Classifier
Pub Date: 2024-04-11 DOI: 10.1109/TMLCN.2024.3387430
Yunhao Shi;Hua Xu;Zisen Qi;Yue Zhang;Dan Wang;Lei Jiang
The advancement of deep learning (DL) techniques has led to significant progress in Automatic Modulation Classification (AMC). However, most existing DL-based AMC methods require massive training samples, which are difficult to obtain in non-cooperative scenarios. The identification of modulation types under small-sample conditions has become an increasingly urgent problem. In this paper, we present a novel few-shot AMC model named the Spatial Temporal Transductive Modulation Classifier (STTMC), which comprises two modules: a feature extraction module and a graph network module. The former is responsible for extracting diverse features through a spatiotemporal parallel network, while the latter facilitates transductive decision-making through a graph network that uses a closed-form solution. Notably, STTMC classifies a group of test signals simultaneously, using an episodic training strategy to increase the stability of the few-shot model. Experimental results on the RadioML.2018.01A and RadioML.2016.10A datasets demonstrate that the proposed method performs well in 3-way K-shot, 5-way K-shot, and 10-way K-shot configurations. In particular, STTMC outperforms other existing AMC methods by a large margin.
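A transductive graph classifier with a closed-form solution can be sketched as classic label propagation over an episode's support and query embeddings. The snippet below illustrates that mechanism, not the authors' exact graph module; the spatiotemporal feature extractor is assumed to exist upstream and is replaced here by raw input vectors.

```python
import numpy as np

def transductive_episode(feats, labels_onehot, n_support, alpha=0.5):
    """Closed-form label propagation on one few-shot episode.

    feats: (N, d) embeddings of support + query signals; labels_onehot:
    (N, C) with query rows all-zero. Solves (I - alpha*S) F = Y, the
    standard label-propagation solution, so all queries in the episode
    are classified jointly.
    """
    sq = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / sq.mean())            # RBF similarity graph
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))        # symmetric normalization
    F = np.linalg.solve(np.eye(len(feats)) - alpha * S, labels_onehot)
    return F[n_support:].argmax(axis=1)    # predicted classes for queries

rng = np.random.default_rng(0)
support = np.vstack([rng.normal(c, 0.3, size=(5, 8)) for c in (0.0, 2.0)])
query = rng.normal(2.0, 0.3, size=(3, 8))       # should land in class 1
Y = np.zeros((13, 2)); Y[:5, 0] = 1; Y[5:10, 1] = 1
print(transductive_episode(np.vstack([support, query]), Y, n_support=10))
```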
{"title":"STTMC: A Few-Shot Spatial Temporal Transductive Modulation Classifier","authors":"Yunhao Shi;Hua Xu;Zisen Qi;Yue Zhang;Dan Wang;Lei Jiang","doi":"10.1109/TMLCN.2024.3387430","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3387430","url":null,"abstract":"The advancement of deep learning (DL) techniques has led to significant progress in Automatic Modulation Classification (AMC). However, most existing DL-based AMC methods require massive training samples, which are difficult to obtain in non-cooperative scenarios. The identification of modulation types under small sample conditions has become an increasingly urgent problem. In this paper, we present a novel few-shot AMC model named the Spatial Temporal Transductive Modulation Classifier (STTMC), which comprises two modules: a feature extraction module and a graph network module. The former is responsible for extracting diverse features through a spatiotemporal parallel network, while the latter facilitates transductive decision-making through a graph network that uses a closed-form solution. Notably, STTMC classifies a group of test signals simultaneously to increase stability of few-shot model with an episode training strategy. Experimental results on the RadioML.2018.01A and RadioML.2016.10A datasets demonstrate that the proposed method perform well in 3way-Kshot, 5way-Kshot and 10way-Kshot configurations. In particular, STTMC outperforms other existing AMC methods by a large margin.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"546-559"},"PeriodicalIF":0.0,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10497130","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140648000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A Deep Learning Based Induced GNSS Spoof Detection Framework
Pub Date: 2024-04-10 DOI: 10.1109/TMLCN.2024.3386649
Asif Iqbal;Muhammad Naveed Aman;Biplab Sikdar
The Global Navigation Satellite System (GNSS) plays a crucial role in critical infrastructure by delivering precise timing and positional data. Nonetheless, the civilian segment of the GNSS remains susceptible to various spoofing attacks, necessitating robust detection mechanisms. The ability to deter such attacks significantly enhances the reliability and security of systems utilizing GNSS technology. Supervised Machine Learning (ML) techniques have shown promise in spoof detection. However, their effectiveness hinges on training data encompassing all possible attack scenarios, rendering them vulnerable to novel attack vectors. To address this limitation, we explore representation learning-based methods. These methods can be trained on a single data class and subsequently applied to classify test samples as either belonging to the training class or not. In this context, we introduce a GNSS spoof detection model comprising a Variational AutoEncoder (VAE) and a Generative Adversarial Network (GAN). The composite model is designed to efficiently learn the class distribution of the training data. The features used for training are extracted from the radio frequency and tracking modules of a standard GNSS receiver. To train our model, we leverage the Texas Spoofing Test Battery (TEXBAT) datasets. Our trained model yields three distinct detectors capable of effectively identifying spoofed signals. The detection performance of these detectors on the simpler to intermediate datasets reaches approximately 99%, demonstrating their robustness. In the case of the subtle attack scenario represented by DS-7, our approach achieves an approximate detection rate of 95%. In contrast, under supervised learning, the best detection score for DS-7 remains limited to 44.1%.
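In this one-class (representation-learning) setting, a detector is trained on clean signals only and scores test samples by how poorly the model explains them. The following is a deliberately simplified PyTorch sketch that uses a plain autoencoder's reconstruction error in place of the paper's VAE/GAN pair; the 16-dimensional input is a stand-in for the RF and tracking-module features named above.

```python
import torch
from torch import nn

class AE(nn.Module):
    """Tiny autoencoder; reconstruction error acts as the spoof score."""
    def __init__(self, d_in=16, d_z=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(), nn.Linear(32, d_z))
        self.dec = nn.Sequential(nn.Linear(d_z, 32), nn.ReLU(), nn.Linear(32, d_in))
    def forward(self, x):
        return self.dec(self.enc(x))

torch.manual_seed(0)
clean = torch.randn(512, 16)               # authentic-signal features only
model = AE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                       # fit the single (clean) class
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(clean), clean)
    loss.backward()
    opt.step()

with torch.no_grad():                      # higher score -> likely spoofed
    test = torch.cat([clean[:4], 3 * torch.randn(4, 16)])
    score = ((model(test) - test) ** 2).mean(dim=1)
    print(score)
```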
{"title":"A Deep Learning Based Induced GNSS Spoof Detection Framework","authors":"Asif Iqbal;Muhammad Naveed Aman;Biplab Sikdar","doi":"10.1109/TMLCN.2024.3386649","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3386649","url":null,"abstract":"The Global Navigation Satellite System (GNSS) plays a crucial role in critical infrastructure by delivering precise timing and positional data. Nonetheless, the civilian segment of the GNSS remains susceptible to various spoofing attacks, necessitating robust detection mechanisms. The ability to deter such attacks significantly enhances the reliability and security of systems utilizing GNSS technology. Supervised Machine Learning (ML) techniques have shown promise in spoof detection. However, their effectiveness hinges on training data encompassing all possible attack scenarios, rendering them vulnerable to novel attack vectors. To address this limitation, we explore representation learning-based methods. These methods can be trained with a single data class and subsequently applied to classify test samples as either belonging to the training class or not. In this context, we introduce a GNSS spoof detection model comprising a Variational AutoEncoder (VAE) and a Generative Adversarial Network (GAN). The composite model is designed to efficiently learn the class distribution of the training data. The features used for training are extracted from the radio frequency and tracking modules of a standard GNSS receiver. To train our model, we leverage the Texas Spoofing Test Battery (TEXBAT) datasets. Our trained model yields three distinct detectors capable of effectively identifying spoofed signals. The detection performance across simpler to intermediate datasets for these detectors reaches approximately 99%, demonstrating their robustness. In the case of subtle attack scenario represented by DS-7, our approach achieves an approximate detection rate of 95%. In contrast, under supervised learning, the best detection score for DS-7 remains limited to 44.1%.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"457-478"},"PeriodicalIF":0.0,"publicationDate":"2024-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10495074","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140633565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Fast Context Adaptation in Cost-Aware Continual Learning
Pub Date: 2024-04-09 DOI: 10.1109/TMLCN.2024.3386647
Seyyidahmed Lahmer;Federico Mason;Federico Chiariotti;Andrea Zanella
In the past few years, Deep Reinforcement Learning (DRL) has become a valuable solution for automatically learning efficient resource management strategies in complex networks with time-varying statistics. However, the increased complexity of 5G and Beyond networks requires correspondingly more complex learning agents, and the learning process itself might end up competing with users for communication and computational resources. This creates friction: on the one hand, the learning process needs resources to quickly converge to an effective strategy; on the other hand, the learning process needs to be efficient, i.e., take as few resources as possible from the user's data plane, so as not to throttle users' Quality of Service (QoS). In this paper, we investigate this trade-off, which we refer to as the cost of learning, and propose a dynamic strategy to balance the resources assigned to the data plane and those reserved for learning. With the proposed approach, a learning agent can quickly converge to an efficient resource allocation strategy and adapt to changes in the environment, in keeping with the Continual Learning (CL) paradigm, while minimizing the impact on the users' QoS. Simulation results show that the proposed method outperforms static allocation methods with minimal learning overhead, almost reaching the performance of an ideal out-of-band CL solution.
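The balancing idea can be made concrete with a small control loop: reserve a fraction of network resources for the learning agent and adjust it according to measured user QoS. This is a hedged illustration of the trade-off described above, not the paper's actual allocation policy; the step size and bounds are arbitrary.

```python
def update_learning_share(rho, qos_measured, qos_target, step=0.05,
                          rho_min=0.0, rho_max=0.5):
    """Toy controller for the learning / data-plane resource split.

    rho is the fraction of resources reserved for learning traffic:
    grow it while user QoS has slack, shrink it as soon as measured
    QoS drops below target, so learning never starves the data plane.
    """
    if qos_measured >= qos_target:
        return min(rho_max, rho + step)   # QoS has headroom: learn faster
    return max(rho_min, rho - step)       # protect the users' data plane

rho = 0.1
for qos in [0.95, 0.97, 0.88, 0.85, 0.96]:   # hypothetical QoS samples
    rho = update_learning_share(rho, qos, qos_target=0.9)
    print(f"learning share: {rho:.2f}")
```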
{"title":"Fast Context Adaptation in Cost-Aware Continual Learning","authors":"Seyyidahmed Lahmer;Federico Mason;Federico Chiariotti;Andrea Zanella","doi":"10.1109/TMLCN.2024.3386647","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3386647","url":null,"abstract":"In the past few years, Deep Reinforcement Learning (DRL) has become a valuable solution to automatically learn efficient resource management strategies in complex networks with time-varying statistics. However, the increased complexity of 5G and Beyond networks requires correspondingly more complex learning agents and the learning process itself might end up competing with users for communication and computational resources. This creates friction: on the one hand, the learning process needs resources to quickly converge to an effective strategy; on the other hand, the learning process needs to be efficient, i.e., take as few resources as possible from the user’s data plane, so as not to throttle users’ Quality of Service (QoS). In this paper, we investigate this trade-off, which we refer to as cost of learning, and propose a dynamic strategy to balance the resources assigned to the data plane and those reserved for learning. With the proposed approach, a learning agent can quickly converge to an efficient resource allocation strategy and adapt to changes in the environment as for the Continual Learning (CL) paradigm, while minimizing the impact on the users’ QoS. Simulation results show that the proposed method outperforms static allocation methods with minimal learning overhead, almost reaching the performance of an ideal out-of-band CL solution.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"479-494"},"PeriodicalIF":0.0,"publicationDate":"2024-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10495063","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140633566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Deep Reinforcement Learning-Based Robust Design for an IRS-Assisted MISO-NOMA System
Pub Date: 2024-04-08 DOI: 10.1109/TMLCN.2024.3385748
Abdulhamed Waraiet;Kanapathippillai Cumanan;Zhiguo Ding;Octavia A. Dobre
In this paper, we propose a robust design for an intelligent reflecting surface (IRS)-assisted multiple-input single-output (MISO) non-orthogonal multiple access (NOMA) system. By considering channel uncertainties, the original robust design problem is formulated as a sum-rate maximization problem under a set of constraints. In particular, the uncertainties associated with the direct channels and the channels reflected through the IRS elements are taken into account in the design and are modelled as bounded errors. However, the original robust problem is not jointly convex in the beamformers at the base station and the phase shifts of the IRS elements. Therefore, we reformulate the original robust design as a reinforcement learning problem and develop an algorithm based on the twin-delayed deep deterministic policy gradient agent (also known as TD3). In particular, the proposed algorithm solves the original problem by jointly designing the beamformers and the phase shifts, which is not possible with conventional optimization techniques. Numerical results are provided to validate the effectiveness and evaluate the performance of the proposed robust design. In particular, the results demonstrate the competitive and promising capabilities of the proposed robust algorithm, which achieves significant gains in terms of robustness and system sum-rates over the baseline deep deterministic policy gradient agent. In addition, the algorithm can handle both fixed and dynamic channels, which gives deep reinforcement learning methods an edge over hand-crafted convex optimization-based algorithms.
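Joint design by a continuous-action DRL agent hinges on mapping a raw action vector into a feasible point: beamformers satisfying the power budget and unit-modulus IRS phase shifts. The helper below sketches one plausible encoding under assumed dimensions; the paper's actual action layout may differ.

```python
import numpy as np

def action_to_design(a, n_tx, n_users, n_irs, p_max=1.0):
    """Map a raw agent action in [-1, 1]^dim to a feasible joint design.

    Assumed layout (illustrative, not the paper's exact encoding):
    first 2*n_tx*n_users entries -> real/imag beamformer parts, scaled
    to respect the total power budget; last n_irs entries -> IRS phases,
    wrapped onto the unit circle so each element keeps unit modulus.
    """
    k = n_tx * n_users
    w = (a[:k] + 1j * a[k:2 * k]).reshape(n_tx, n_users)
    p = np.linalg.norm(w) ** 2
    if p > p_max:                        # project onto the power constraint
        w *= np.sqrt(p_max / p)
    theta = np.exp(1j * np.pi * a[2 * k:2 * k + n_irs])  # unit-modulus shifts
    return w, theta

a = np.random.uniform(-1, 1, size=2 * 4 * 2 + 8)
w, theta = action_to_design(a, n_tx=4, n_users=2, n_irs=8)
print(np.linalg.norm(w) ** 2 <= 1.0, np.allclose(np.abs(theta), 1.0))
```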
{"title":"Deep Reinforcement Learning-Based Robust Design for an IRS-Assisted MISO-NOMA System","authors":"Abdulhamed Waraiet;Kanapathippillai Cumanan;Zhiguo Ding;Octavia A. Dobre","doi":"10.1109/TMLCN.2024.3385748","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3385748","url":null,"abstract":"In this paper, we propose a robust design for an intelligent reflecting surface (IRS)-assisted multiple-input single output non-orthogonal multiple access (NOMA) system. By considering channel uncertainties, the original robust design problem is formulated as a sum-rate maximization problem under a set of constraints. In particular, the uncertainties associated with reflected channels through IRS elements and direct channels are taken into account in the design and they are modelled as bounded errors. However, the original robust problem is not jointly convex in terms of beamformers at the base station and phase shifts of IRS elements. Therefore, we reformulate the original robust design as a reinforcement learning problem and develop an algorithm based on the twin-delayed deep deterministic policy gradient agent (also known as TD3). In particular, the proposed algorithm solves the original problem by jointly designing the beamformers and the phase shifts, which is not possible with conventional optimization techniques. Numerical results are provided to validate the effectiveness and evaluate the performance of the proposed robust design. In particular, the results demonstrate the competitive and promising capabilities of the proposed robust algorithm, which achieves significant gains in terms of robustness and system sum-rates over the baseline deep deterministic policy gradient agent. In addition, the algorithm has the ability to deal with fixed and dynamic channels, which gives deep reinforcement learning methods an edge over hand-crafted convex optimization-based algorithms.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"424-441"},"PeriodicalIF":0.0,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10494408","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140633557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A DDPG-Based Zero-Touch Dynamic Prioritization to Address Starvation of Services for Deploying Microservices-Based VNFs
Pub Date: 2024-04-08 DOI: 10.1109/TMLCN.2024.3386152
Swarna B. Chetty;Hamed Ahmadi;Avishek Nag
The sixth generation of mobile networks (6G) promises applications and services with faster data rates, ultra-reliability, and lower latency compared to the fifth generation (5G). These highly demanding 6G applications will burden the network by imposing stringent performance requirements. Network Function Virtualization (NFV) reduces costs by running network functions as Virtual Network Functions (VNFs) on commodity hardware. While NFV is a promising solution, it poses Resource Allocation (RA) challenges. To enhance RA efficiency, we address two critical subproblems: the need for dynamic service priority and the starvation of low-priority services. We introduce 'Dynamic Prioritization' (DyPr), employing an ML model to emphasize macro- and micro-level priority for unseen services, and we address the starvation problem present in current solutions and their limitations. We present 'Adaptive Scheduling' (AdSch), a three-factor approach (priority, threshold waiting time, and reliability) that surpasses traditional priority-based methods. In this context, starvation refers to extended waiting times and the eventual rejection of low-priority services due to delay. To further improve efficiency, we also study a traffic-aware starvation and deployment problem. We employed a Deep Deterministic Policy Gradient (DDPG) model for adaptive scheduling and an online Ridge Regression (RR) model for dynamic prioritization, creating a zero-touch solution. The DDPG model efficiently identified 'Beneficial and Starving' services, alleviating the starvation issue by deploying twice as many low-priority services. With an accuracy rate exceeding 80%, our online RR model quickly learns prioritization patterns in under 100 transitions. We categorized services as 'High-Demand' (HD) or 'Not So High Demand' (NHD) based on traffic volume, providing insight into high revenue-generating services. We achieved a nearly optimal resource allocation by balancing low-priority HD and low-priority NHD services, deploying twice as many low-priority HD services as a model without traffic awareness.
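The zero-touch prioritizer rests on an online ridge regression that can learn from one (service features, priority) observation at a time. Below is a minimal recursive-least-squares-style sketch, assuming a fixed-length feature vector per service; the features and target values themselves are illustrative, not the paper's.

```python
import numpy as np

class OnlineRidge:
    """Incremental ridge regression for scoring service priority.

    Keeps a regularized d x d Gram matrix and applies a rank-one update
    per observed (features, priority) pair, so the priority model can
    adapt online without retraining from scratch.
    """
    def __init__(self, d, lam=1.0):
        self.A = lam * np.eye(d)      # regularized Gram matrix
        self.b = np.zeros(d)
    def update(self, x, y):
        self.A += np.outer(x, x)
        self.b += y * x
    def predict(self, x):
        return x @ np.linalg.solve(self.A, self.b)

rng = np.random.default_rng(0)
rr = OnlineRidge(d=3)
true_w = np.array([0.5, -0.2, 1.0])          # hypothetical priority weights
for _ in range(100):                         # learns within ~100 transitions
    x = rng.random(3)
    rr.update(x, x @ true_w)
print(rr.predict(np.array([1.0, 0.0, 0.0])))  # approaches 0.5
```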
{"title":"A DDPG-Based Zero-Touch Dynamic Prioritization to Address Starvation of Services for Deploying Microservices-Based VNFs","authors":"Swarna B. Chetty;Hamed Ahmadi;Avishek Nag","doi":"10.1109/TMLCN.2024.3386152","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3386152","url":null,"abstract":"The sixth generation of mobile networks (6G) promises applications and services with faster data rates, ultra-reliability, and lower latency compared to the fifth-generation mobile networks (5G). These highly demanding 6G applications will burden the network by imposing stringent performance requirements. Network Function Virtualization (NFV) reduces costs by running network functions as Virtual Network Functions (VNFs) on commodity hardware. While NFV is a promising solution, it poses Resource Allocation (RA) challenges. To enhance RA efficiency, we addressed two critical subproblems: the requirement of dynamic service priority and a low-priority service starvation problem. We introduce ‘Dynamic Prioritization’ (DyPr), employing an ML model to emphasize macro- and microlevel priority for unseen services and address the existing starvation problem in current solutions and their limitations. We present ‘Adaptive Scheduling’ (AdSch), a three-factor approach (priority, threshold waiting time, and reliability) that surpasses traditional priority-based methods. In this context, starvation refers to extended waiting times and the eventual rejection of low-priority services due to a ‘delay. Also, to further investigate, a traffic-aware starvation and deployment problem is studied to enhance efficiency. We employed a Deep Deterministic Policy Gradient (DDPG) model for adaptive scheduling and an online Ridge Regression (RR) model for dynamic prioritization, creating a zero-touch solution. The DDPG model efficiently identified ‘Beneficial and Starving’ services, alleviating the starvation issue by deploying twice as many low-priority services. With an accuracy rate exceeding 80%, our online RR model quickly learns prioritization patterns in under 100 transitions. We categorized services as ‘High-Demand’ (HD) or ‘Not So High Demand’ (NHD) based on traffic volume, providing insight into high revenue-generating services. We achieved a nearly optimal resource allocation by balancing low-priority HD and low-priority NHD services, deploying twice as many low-priority HD services as a model without traffic awareness.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"526-545"},"PeriodicalIF":0.0,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10494765","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140647819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Hierarchically Federated Learning in Wireless Networks: D2D Consensus and Inter-Cell Aggregation
Pub Date: 2024-04-04 DOI: 10.1109/TMLCN.2024.3385355
Jie Zhang;Li Chen;Yunfei Chen;Xiaohui Chen;Guo Wei
Decentralized federated learning (DFL) architectures enable clients to collaboratively train a shared machine learning model without a central parameter server. However, it is difficult to apply DFL to a multi-cell scenario due to inadequate model averaging and cross-cell device-to-device (D2D) communications. In this paper, we propose a hierarchically decentralized federated learning (HDFL) framework that combines intra-cell D2D links between devices and backhaul communications between base stations. In HDFL, devices from different cells collaboratively train a global model using periodic intra-cell D2D consensus and inter-cell aggregation. A strong convergence guarantee for the proposed HDFL algorithm is established even for non-convex objectives. Based on the convergence analysis, we characterize the impact of each cell's network topology and the communication intervals of intra-cell consensus and inter-cell aggregation on the training performance. To further improve the performance of HDFL, we optimize the computation capacity selection and bandwidth allocation to minimize the training latency and energy overhead. Numerical results based on the MNIST and CIFAR-10 datasets validate the superiority of HDFL over traditional DFL methods in the multi-cell scenario.
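The two-level averaging structure is easy to state in code: a doubly-stochastic mixing step over each cell's D2D graph, followed by a plain average across cells over the backhaul. A minimal sketch with model parameters flattened into vectors; the mixing matrix is assumed given (e.g., Metropolis weights for the cell's D2D topology).

```python
import numpy as np

def d2d_consensus(models, mixing, rounds=1):
    """Intra-cell consensus: models is (n_dev, d); mixing is a
    doubly-stochastic (n_dev, n_dev) matrix matching the D2D graph."""
    for _ in range(rounds):
        models = mixing @ models          # each device averages neighbors
    return models

def inter_cell_aggregate(cell_models):
    """Backhaul step: base stations average their cells' models."""
    avg = np.mean(cell_models, axis=0)
    return [avg.copy() for _ in cell_models]

rng = np.random.default_rng(0)
mixing = np.full((4, 4), 0.25)            # fully connected 4-device cell
cells = [d2d_consensus(rng.normal(size=(4, 10)), mixing) for _ in range(3)]
bs_models = inter_cell_aggregate([c.mean(axis=0) for c in cells])
```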
{"title":"Hierarchically Federated Learning in Wireless Networks: D2D Consensus and Inter-Cell Aggregation","authors":"Jie Zhang;Li Chen;Yunfei Chen;Xiaohui Chen;Guo Wei","doi":"10.1109/TMLCN.2024.3385355","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3385355","url":null,"abstract":"Decentralized federated learning (DFL) architecture enables clients to collaboratively train a shared machine learning model without a central parameter server. However, it is difficult to apply DFL to a multi-cell scenario due to inadequate model averaging and cross-cell device-to-device (D2D) communications. In this paper, we propose a hierarchically decentralized federated learning (HDFL) framework that combines intra-cell D2D links between devices and backhaul communications between base stations. In HDFL, devices from different cells collaboratively train a global model using periodic intra-cell D2D consensus and inter-cell aggregation. The strong convergence guarantee of the proposed HDFL algorithm is established even for non-convex objectives. Based on the convergence analysis, we characterize the network topology of each cell, the communication interval of intra-cell consensus and inter-cell aggregation on the training performance. To further improve the performance of HDFL, we optimize the computation capacity selection and bandwidth allocation to minimize the training latency and energy overhead. Numerical results based on the MNIST and CIFAR-10 datasets validate the superiority of HDFL over traditional DFL methods in the multi-cell scenario.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"442-456"},"PeriodicalIF":0.0,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10491307","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140633558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Transfer Learning With Reconstruction Loss
Pub Date: 2024-04-02 DOI: 10.1109/TMLCN.2024.3384329
Wei Cui;Wei Yu
In most applications of neural networks for mathematical optimization, a dedicated model is trained for each specific optimization objective. However, in many scenarios, several distinct yet correlated objectives or tasks often need to be optimized on the same set of problem inputs. Instead of independently training a different neural network for each problem, it would be more efficient to exploit the correlations between these objectives and to train multiple neural network models with shared model parameters and feature representations. To achieve this, this paper first establishes the concept of common information: the shared knowledge required for solving the correlated tasks. It then proposes a novel approach for model training by adding to the model an additional reconstruction stage associated with a new reconstruction loss, which reconstructs the common information starting from a selected hidden layer in the model. The proposed approach encourages the learned features to be general and transferable, and therefore can be readily used for efficient transfer learning. For the numerical simulations, three applications are studied: transfer learning on MNIST handwritten digit classification, device-to-device wireless network power allocation, and multiple-input single-output network downlink beamforming and localization. Simulation results suggest that the proposed approach is highly efficient in data and model complexity, is resilient to over-fitting, and has competitive performance.
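The training objective is the task loss plus a reconstruction loss attached to a chosen hidden layer. The PyTorch sketch below illustrates the shape of that setup; for simplicity it reconstructs the network input as the "common information", whereas the paper defines the target more generally, and all layer sizes and the loss weight are illustrative.

```python
import torch
from torch import nn

class SharedNet(nn.Module):
    """Task head plus a decoder that reconstructs the common information
    from a selected hidden layer (here: the backbone output)."""
    def __init__(self, d_in=32, d_h=64, d_out=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(d_in, d_h), nn.ReLU())
        self.head = nn.Linear(d_h, d_out)     # task-specific stage
        self.recon = nn.Linear(d_h, d_in)     # added reconstruction stage
    def forward(self, x):
        h = self.backbone(x)
        return self.head(h), self.recon(h)

net = SharedNet()
x = torch.randn(8, 32)
y = torch.randint(0, 10, (8,))
pred, x_hat = net(x)
# joint objective: task loss + weighted reconstruction loss (weight is a guess)
loss = nn.functional.cross_entropy(pred, y) + 0.1 * nn.functional.mse_loss(x_hat, x)
loss.backward()
```

For transfer learning, the backbone (regularized to carry the common information) would be reused, and only a new task head trained on the new objective.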
{"title":"Transfer Learning With Reconstruction Loss","authors":"Wei Cui;Wei Yu","doi":"10.1109/TMLCN.2024.3384329","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3384329","url":null,"abstract":"In most applications of utilizing neural networks for mathematical optimization, a dedicated model is trained for each specific optimization objective. However, in many scenarios, several distinct yet correlated objectives or tasks often need to be optimized on the same set of problem inputs. Instead of independently training a different neural network for each problem separately, it would be more efficient to exploit the correlations between these objectives and to train multiple neural network models with shared model parameters and feature representations. To achieve this, this paper first establishes the concept of common information: the shared knowledge required for solving the correlated tasks, then proposes a novel approach for model training by adding into the model an additional reconstruction stage associated with a new reconstruction loss. This loss is for reconstructing the common information starting from a selected hidden layer in the model. The proposed approach encourages the learned features to be general and transferable, and therefore can be readily used for efficient transfer learning. For numerical simulations, three applications are studied: transfer learning on classifying MNIST handwritten digits, the device-to-device wireless network power allocation, and the multiple-input-single-output network downlink beamforming and localization. Simulation results suggest that the proposed approach is highly efficient in data and model complexity, is resilient to over-fitting, and has competitive performances.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"407-423"},"PeriodicalIF":0.0,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10488445","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140633559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Feature-Based Federated Transfer Learning: Communication Efficiency, Robustness and Privacy
Pub Date: 2024-03-31 DOI: 10.1109/TMLCN.2024.3408131
Feng Wang;M. Cenk Gursoy;Senem Velipasalar
In this paper, we propose feature-based federated transfer learning as a novel approach to improve communication efficiency by reducing the uplink payload by multiple orders of magnitude compared to that of existing approaches in federated learning and federated transfer learning. Specifically, in the proposed feature-based federated learning, we design the extracted features and outputs to be uploaded instead of parameter updates. For this distributed learning model, we determine the required payload and provide comparisons with the existing schemes. Subsequently, we analyze the robustness of feature-based federated transfer learning against packet loss, data insufficiency, and quantization. Finally, we address privacy considerations by defining and analyzing label privacy leakage and feature privacy leakage, and investigating mitigating approaches. For all aforementioned analyses, we evaluate the performance of the proposed learning scheme via experiments on an image classification task and a natural language processing task to demonstrate its effectiveness (https://github.com/wfwf10/Feature-based-Federated-Transfer-Learning).
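The payload saving comes from uploading per-batch extracted features and outputs rather than a full parameter update. A back-of-the-envelope sketch with made-up sizes follows; the random projection stands in for a real local feature extractor, and the float16 cast gestures at the quantization analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
extractor = lambda x: x @ rng.standard_normal((128, 64))   # stand-in extractor

def client_payload(x, labels):
    """Client uplink: extracted features (quantized) + outputs,
    instead of a parameter update for the whole model."""
    feats = extractor(x).astype(np.float16)
    return feats, labels

x = rng.standard_normal((32, 128))          # one local batch
labels = rng.integers(0, 10, 32)
feats, _ = client_payload(x, labels)
print("feature floats per round:", feats.size)               # 32 * 64 = 2048
print("vs. parameter-update floats (e.g. 1M-param model):", 1_000_000)
```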
{"title":"Feature-Based Federated Transfer Learning: Communication Efficiency, Robustness and Privacy","authors":"Feng Wang;M. Cenk Gursoy;Senem Velipasalar","doi":"10.1109/TMLCN.2024.3408131","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3408131","url":null,"abstract":"In this paper, we propose feature-based federated transfer learning as a novel approach to improve communication efficiency by reducing the uplink payload by multiple orders of magnitude compared to that of existing approaches in federated learning and federated transfer learning. Specifically, in the proposed feature-based federated learning, we design the extracted features and outputs to be uploaded instead of parameter updates. For this distributed learning model, we determine the required payload and provide comparisons with the existing schemes. Subsequently, we analyze the robustness of feature-based federated transfer learning against packet loss, data insufficiency, and quantization. Finally, we address privacy considerations by defining and analyzing label privacy leakage and feature privacy leakage, and investigating mitigating approaches. For all aforementioned analyses, we evaluate the performance of the proposed learning scheme via experiments on an image classification task and a natural language processing task to demonstrate its effectiveness (\u0000<uri>https://github.com/wfwf10/Feature-based-Federated-Transfer-Learning</uri>\u0000).","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"823-840"},"PeriodicalIF":0.0,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10542971","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141453361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Optimal Access Point Centric Clustering for Cell-Free Massive MIMO Using Gaussian Mixture Model Clustering
Pub Date: 2024-03-21 DOI: 10.1109/TMLCN.2024.3403789
Pialy Biswas;Ranjan K. Mallik;Khaled B. Letaief
This paper proposes a Gaussian mixture model (GMM) based access point (AP) clustering technique for cell-free massive MIMO (CFMM) communication systems. The APs are first clustered on the basis of large-scale fading coefficients, and the users are assigned to clusters according to their channel gains. As the number of clusters increases, the overall data rate of the system degrades, creating a trade-off between the number of clusters and the average rate per user. To address this problem, we present an optimization problem that jointly optimizes the upper bound on the average downlink rate per user and the number of clusters. The optimal number of clusters is determined by solving the optimization problem, after which the APs and users are grouped. As a result, the computational expense is much lower than that of current techniques, since existing methods require evaluating the network performance over multiple iterations to find the optimal number of clusters. In addition, we analyze the performance of both balanced and unbalanced clustering. Numerical results indicate that unbalanced clustering yields a superior rate per user while maintaining lower complexity than balanced clustering. Furthermore, we investigate the statistical properties of the spectral efficiency (SE) per user in the clustered CFMM. The findings reveal that the SE per user can be approximated by a logistic distribution.
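AP clustering on large-scale fading coefficients maps directly onto an off-the-shelf Gaussian mixture fit. The sketch below uses synthetic gains and a fixed number of components (the paper instead obtains the number of clusters from its optimization problem), followed by a user assignment via the strongest channel gain.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# per-AP large-scale fading profile in dB (illustrative synthetic features)
beta_ap = rng.normal(-100.0, 8.0, size=(64, 5))        # 64 APs, 5-dim profile

k = 4                                  # illustrative; the paper optimizes k
gmm = GaussianMixture(n_components=k, random_state=0).fit(beta_ap)
ap_cluster = gmm.predict(beta_ap)      # AP-centric clusters

# assign each user to the cluster of its strongest AP (by channel gain)
gain_user_ap = rng.normal(-100.0, 8.0, size=(16, 64))  # 16 users x 64 APs (dB)
best_ap = gain_user_ap.argmax(axis=1)
user_cluster = ap_cluster[best_ap]
print(np.bincount(user_cluster, minlength=k))          # users per cluster
```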
{"title":"Optimal Access Point Centric Clustering for Cell-Free Massive MIMO Using Gaussian Mixture Model Clustering","authors":"Pialy Biswas;Ranjan K. Mallik;Khaled B. Letaief","doi":"10.1109/TMLCN.2024.3403789","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3403789","url":null,"abstract":"This paper proposes a Gaussian mixture model (GMM) based access point (AP) clustering technique in cell-free massive MIMO (CFMM) communication systems. The APs are first clustered on the basis of large-scale fading coefficients, and the users are assigned to each cluster depending on the channel gain. As the number of clusters increases, there is a degradation in the overall data rate of the system, causing a trade-off between the cluster number and average rate per user. To address this problem, we present an optimization problem that optimizes both the upper bound on the average downlink rate per user and the number of clusters. The optimal number of clusters is intuitively determined by solving the optimization problem, and then grouping the APs and users. As a result, the computation expense is much lower than the current techniques, since the existing methods require evaluations of the network performance in multiple iterations to find the optimal number of clusters. In addition, we analyze the performance of both balanced and unbalanced clustering. Numerical results will indicate that the unbalanced clustering yields a superior rate per user while maintaining a lower level of complexity compared to the balanced one. Furthermore, we investigate the statistical analysis of the spectral efficiency (SE) per user in the clustered CFMM. The findings reveal that the SE per user can be approximated by the logistic distribution.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"675-687"},"PeriodicalIF":0.0,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10535986","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141187291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0