
Latest Publications in IEEE/ACM Transactions on Networking

Diagnosing End-Host Network Bottlenecks in RDMA Servers
IF 3.0 | CAS Tier 3, Computer Science | Q2, COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-16 | DOI: 10.1109/TNET.2024.3416419
Kefei Liu;Jiao Zhang;Zhuo Jiang;Haoran Wei;Xiaolong Zhong;Lizhuang Tan;Tian Pan;Tao Huang
In RDMA (Remote Direct Memory Access) networks, end-host networks, including intra-host networks and RNICs (RDMA NICs), have long been considered robust and have received little attention. However, as RNIC line rates rapidly increase to multi-hundred gigabits, the intra-host network becomes a potential performance bottleneck for network applications. Intra-host network bottlenecks can result in degraded intra-host bandwidth and increased intra-host latency. In addition, RNIC network problems can result in connection failures and packet drops. Host network problems can severely degrade network performance, yet when they occur they can hardly be noticed due to the lack of a monitoring system. Furthermore, existing diagnostic mechanisms cannot efficiently diagnose host network problems. In this paper, we analyze the symptoms of host network problems based on our long-term troubleshooting experience and propose Hostping, the first monitoring and diagnostic system dedicated to host networks. The core idea of Hostping is to conduct 1) loopback tests between RNICs and endpoints within the host to measure intra-host latency and bandwidth, and 2) mutual probing between RNICs on a host to measure RNIC connectivity. We have deployed Hostping on thousands of servers in our distributed machine learning system. Not only can Hostping detect and diagnose the host network problems we already knew of within minutes, but it also reveals eight problems we had not noticed before.
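The mutual-probing idea can be sketched as a small outlier check: probe every ordered RNIC pair and flag pairs whose measured bandwidth falls far below the host's median. The probe values, device names, and the half-median threshold below are illustrative stand-ins for real RDMA loopback measurements, not Hostping's actual implementation.

```python
# Hypothetical sketch of Hostping-style mutual probing between RNICs on one
# host. Probe results are canned numbers standing in for real measurements.
from statistics import median

def diagnose(bw_gbps):
    """bw_gbps: {(src_rnic, dst_rnic): measured bandwidth in Gbps}."""
    med = median(bw_gbps.values())
    # Flag any pair achieving less than half the median bandwidth.
    return sorted(pair for pair, bw in bw_gbps.items() if bw < 0.5 * med)

probes = {
    ("rnic0", "rnic1"): 97.2, ("rnic1", "rnic0"): 96.8,
    ("rnic0", "rnic2"): 41.3,  # degraded path, e.g. a congested PCIe link
    ("rnic2", "rnic0"): 96.1,
}
print(diagnose(probes))  # -> [('rnic0', 'rnic2')]
```

Asymmetric results (one direction degraded, the other healthy) are exactly what per-pair probing surfaces and what a single end-to-end test would miss.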
{"title":"Diagnosing End-Host Network Bottlenecks in RDMA Servers","authors":"Kefei Liu;Jiao Zhang;Zhuo Jiang;Haoran Wei;Xiaolong Zhong;Lizhuang Tan;Tian Pan;Tao Huang","doi":"10.1109/TNET.2024.3416419","DOIUrl":"10.1109/TNET.2024.3416419","url":null,"abstract":"In RDMA (Remote Direct Memory Access) networks, end-host networks, including intra-host networks and RNICs (RDMA NIC), were considered robust and have received little attention. However, as the RNIC line rate rapidly increases to multi-hundred gigabits, the intra-host network becomes a potential performance bottleneck for network applications. Intra-host network bottlenecks can result in degraded intra-host bandwidth and increased intra-host latency. In addition, RNIC network problems can result in connection failures and packet drops. Host network problems can severely degrade network performance. However, when host network problems occur, they can hardly be noticed due to the lack of a monitoring system. Furthermore, existing diagnostic mechanisms cannot efficiently diagnose host network problems. In this paper, we analyze the symptom of host network problems based on our long-term troubleshooting experience and propose Hostping, the first monitoring and diagnostic system dedicated to host networks. The core idea of Hostping is to conduct 1) loopback tests between RNICs and endpoints within the host to measure intra-host latency and bandwidth, and 2) mutual probing between RNICs on a host to measure RNIC connectivity. We have deployed Hostping on thousands of servers in our distributed machine learning system. 
Not only can Hostping detect and diagnose host network problems we already knew in minutes, but it also reveals eight problems we did not notice before.","PeriodicalId":13443,"journal":{"name":"IEEE/ACM Transactions on Networking","volume":"32 5","pages":"4302-4316"},"PeriodicalIF":3.0,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141720149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Distributional Reinforcement Learning-Based Adaptive Routing With Guaranteed Delay Bounds
IF 3.0 | CAS Tier 3, Computer Science | Q2, COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-15 | DOI: 10.1109/TNET.2024.3425652
Jianmin Liu;Dan Li;Yongjun Xu
Real-time applications that require timely data delivery over wireless multi-hop networks within specified deadlines are increasingly common. Effective routing protocols that can guarantee real-time QoS are crucial, yet challenging, due to the unpredictable variations in end-to-end delay caused by unreliable wireless channels. In such conditions, the upper bound on the end-to-end delay, i.e., the worst-case end-to-end delay, should be guaranteed to fall within the deadline. However, existing routing protocols with guaranteed delay bounds cannot strictly guarantee real-time QoS because they assume that the worst-case end-to-end delay is known and ignore the impact of routing policies on determining it. In this paper, we relax this assumption and propose DDRL-ARGB, an Adaptive Routing with Guaranteed delay Bounds scheme using Deep Distributional Reinforcement Learning (DDRL). DDRL-ARGB adopts DDRL to jointly determine the worst-case end-to-end delay and learn routing policies. To accurately determine the worst-case end-to-end delay, DDRL-ARGB employs a quantile regression deep Q-network to learn the cumulative distribution of the end-to-end delay. To guarantee real-time QoS, DDRL-ARGB optimizes routing decisions under the constraint that the worst-case end-to-end delay stays within the deadline. To alleviate traffic congestion, DDRL-ARGB considers the network congestion status when making routing decisions. Extensive results show that DDRL-ARGB can accurately calculate the worst-case end-to-end delay, and can strictly guarantee real-time QoS under a small tolerated violation probability, compared with two state-of-the-art routing protocols.
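The deadline constraint above can be illustrated with plain empirical quantiles: treat a high quantile of observed per-path delays as the worst-case estimate and admit a route only if it stays within the deadline. The nearest-rank quantile, the 0.99 level, and the sample delays are assumptions for illustration; the paper learns the full delay CDF with a quantile-regression deep Q-network instead.

```python
# Illustrative deadline check in the spirit of DDRL-ARGB's constraint.
def empirical_quantile(samples, q):
    """Nearest-rank empirical quantile of a list of delay samples."""
    s = sorted(samples)
    idx = min(len(s) - 1, int(q * len(s)))
    return s[idx]

delays_ms = [12, 14, 13, 15, 40, 14, 13, 16, 14, 13]
worst_case = empirical_quantile(delays_ms, 0.99)  # tail estimate, not the mean
deadline_ms = 30
print(worst_case, worst_case <= deadline_ms)  # -> 40 False
```

Note how the mean delay (about 16 ms) would pass the 30 ms deadline, while the tail estimate correctly rejects the path: this is why the worst case, not the average, must be bounded.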
{"title":"Deep Distributional Reinforcement Learning-Based Adaptive Routing With Guaranteed Delay Bounds","authors":"Jianmin Liu;Dan Li;Yongjun Xu","doi":"10.1109/TNET.2024.3425652","DOIUrl":"10.1109/TNET.2024.3425652","url":null,"abstract":"Real-time applications that require timely data delivery over wireless multi-hop networks within specified deadlines are growing increasingly. Effective routing protocols that can guarantee real-time QoS are crucial, yet challenging, due to the unpredictable variations in end-to-end delay caused by unreliable wireless channels. In such conditions, the upper bound on the end-to-end delay, i.e., worst-case end-to-end delay, should be guaranteed within the deadline. However, existing routing protocols with guaranteed delay bounds cannot strictly guarantee real-time QoS because they assume that the worst-case end-to-end delay is known and ignore the impact of routing policies on the worst-case end-to-end delay determination. In this paper, we relax this assumption and propose DDRL-ARGB, an Adaptive Routing with Guaranteed delay Bounds using Deep Distributional Reinforcement Learning (DDRL). DDRL-ARGB adopts DDRL to jointly determine the worst-case end-to-end delay and learn routing policies. To accurately determine worst-case end-to-end delay, DDRL-ARGB employs a quantile regression deep Q-network to learn the end-to-end delay cumulative distribution. To guarantee real-time QoS, DDRL-ARGB optimizes routing decisions under the constraint of worst-case end-to-end delay within the deadline. To improve traffic congestion, DDRL-ARGB considers the network congestion status when making routing decisions. 
Extensive results show that DDRL-ARGB can accurately calculate worst-case end-to-end delay, and can strictly guarantee real-time QoS under a small tolerant violation probability against two state-of-the-art routing protocols.","PeriodicalId":13443,"journal":{"name":"IEEE/ACM Transactions on Networking","volume":"32 6","pages":"4692-4706"},"PeriodicalIF":3.0,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141720152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
How Valuable is Your Data? Optimizing Client Recruitment in Federated Learning
IF 3.0 | CAS Tier 3, Computer Science | Q2, COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-15 | DOI: 10.1109/TNET.2024.3422264
Yichen Ruan;Xiaoxi Zhang;Carlee Joe-Wong
Federated learning allows distributed clients to train a shared machine learning model while preserving user privacy. In this framework, user devices (i.e., clients) perform local iterations of the learning algorithm on their data. These updates are periodically aggregated to form a shared model. Thus, a client represents the bundle of the user data, the device, and the user's willingness to participate: since participating in federated learning requires clients to expend resources and reveal some information about their data, users may require some form of compensation to contribute to the training process. Recruiting more users generally results in higher accuracy, but slower completion time and higher cost. We present the first work to theoretically analyze the resulting performance tradeoffs in deciding which clients to recruit for a federated learning algorithm. Our framework accounts for both accuracy (training and testing) and efficiency (completion time and cost) metrics. We provide solutions to this NP-hard optimization problem and verify the value of client recruitment in experiments on synthetic and real-world data. The results of this work can serve as a guideline for the real-world deployment of federated learning and an initial investigation of the client recruitment problem.
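The accuracy-versus-cost tradeoff can be made concrete with a toy greedy heuristic: give each candidate client an estimated accuracy gain and a recruitment cost, and recruit the best gain-per-cost clients within a budget. The paper's actual NP-hard formulation is much richer; the gain model and all numbers here are illustrative assumptions.

```python
# Toy greedy client recruitment under a budget (not the paper's algorithm).
def recruit(clients, budget):
    """clients: {name: (est_accuracy_gain, cost)} -> chosen client names."""
    chosen, spent = [], 0.0
    # Visit clients in decreasing gain-per-cost order.
    for name, (gain, cost) in sorted(
            clients.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

candidates = {"a": (0.9, 3.0), "b": (0.5, 1.0), "c": (0.4, 2.0), "d": (0.2, 0.5)}
print(recruit(candidates, budget=4.0))  # -> ['b', 'd', 'c']
```

Client "a" has the largest raw gain but is skipped once the cheaper, more cost-efficient clients exhaust the budget, which is the kind of tradeoff the recruitment problem formalizes.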
{"title":"How Valuable is Your Data? Optimizing Client Recruitment in Federated Learning","authors":"Yichen Ruan;Xiaoxi Zhang;Carlee Joe-Wong","doi":"10.1109/TNET.2024.3422264","DOIUrl":"10.1109/TNET.2024.3422264","url":null,"abstract":"Federated learning allows distributed clients to train a shared machine learning model while preserving user privacy. In this framework, user devices (i.e., clients) perform local iterations of the learning algorithm on their data. These updates are periodically aggregated to form a shared model. Thus, a client represents the bundle of the user data, the device, and the user’s willingness to participate: since participating in federated learning requires clients to expend resources and reveal some information about their data, users may require some form of compensation to contribute to the training process. Recruiting more users generally results in higher accuracy, but slower completion time and higher cost. We propose the first work to theoretically analyze the resulting performance tradeoffs in deciding which clients to recruit for the federated learning algorithm. Our framework accounts for both accuracy (training and testing) and efficiency (completion time and cost) metrics. We provide solutions to this NP-Hard optimization problem and verify the value of client recruitment in experiments on synthetic and real-world data. 
The results of this work can serve as a guideline for the real-world deployment of federated learning and an initial investigation of the client recruitment problem.","PeriodicalId":13443,"journal":{"name":"IEEE/ACM Transactions on Networking","volume":"32 5","pages":"4207-4221"},"PeriodicalIF":3.0,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141720155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Asynchronous Decentralized Federated Learning for Heterogeneous Devices
IF 3.0 | CAS Tier 3, Computer Science | Q2, COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-15 | DOI: 10.1109/TNET.2024.3424444
Yunming Liao;Yang Xu;Hongli Xu;Min Chen;Lun Wang;Chunming Qiao
Data generated at the network edge can be processed locally by leveraging the emerging technology of Federated Learning (FL). However, non-IID local data will lead to degraded model accuracy, and the heterogeneity of edge nodes inevitably slows down model training. Moreover, to avoid the potential communication bottleneck of parameter-server-based FL, we concentrate on Decentralized Federated Learning (DFL), which performs distributed model training in a Peer-to-Peer (P2P) manner. To address these challenges, we propose an asynchronous DFL system that incorporates neighbor selection and gradient push, termed AsyDFL. Specifically, we require each edge node to push gradients only to a subset of neighbors for resource efficiency. We first give a theoretical convergence analysis of AsyDFL under the complicated non-IID and heterogeneous scenario, and further design a priority-based algorithm to dynamically select neighbors for each edge node so as to achieve a trade-off between communication cost and model performance. We evaluate the performance of AsyDFL through extensive experiments on a physical platform with 30 NVIDIA Jetson edge devices. Evaluation results show that, compared to the baselines, AsyDFL can reduce the communication cost by 57% and the completion time by about 35% while achieving the same test accuracy, and improve model accuracy by at least 6% under the non-IID scenario.
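The "push gradients only to a prioritized subset of neighbors" idea can be sketched as a scoring step: rank each neighbor by a priority that trades off link bandwidth against model staleness and keep the top k. The scoring function, its weight, and the peer numbers below are assumptions for illustration, not the priority metric defined in the paper.

```python
# Hypothetical sketch of AsyDFL-style priority-based neighbor selection.
def select_neighbors(neighbors, k, alpha=1.0):
    """neighbors: {name: (bandwidth, staleness)} -> top-k names by priority."""
    def priority(item):
        bw, staleness = item[1]
        # Prefer fast links, and stale peers that most need a fresh gradient.
        return bw + alpha * staleness
    ranked = sorted(neighbors.items(), key=priority, reverse=True)
    return [name for name, _ in ranked[:k]]

peers = {"n1": (10.0, 0), "n2": (2.0, 5), "n3": (8.0, 4), "n4": (1.0, 1)}
print(select_neighbors(peers, k=2))  # -> ['n3', 'n1']
```

Pushing to k neighbors instead of all of them is what cuts the per-round communication cost; the convergence analysis in the paper is what justifies that this subset is enough.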
{"title":"Asynchronous Decentralized Federated Learning for Heterogeneous Devices","authors":"Yunming Liao;Yang Xu;Hongli Xu;Min Chen;Lun Wang;Chunming Qiao","doi":"10.1109/TNET.2024.3424444","DOIUrl":"10.1109/TNET.2024.3424444","url":null,"abstract":"Data generated at the network edge can be processed locally by leveraging the emerging technology of Federated Learning (FL). However, non-IID local data will lead to degradation of model accuracy and the heterogeneity of edge nodes inevitably slows down model training efficiency. Moreover, to avoid the potential communication bottleneck in the parameter-server-based FL, we concentrate on the Decentralized Federated Learning (DFL) that performs distributed model training in Peer-to-Peer (P2P) manner. To address these challenges, we propose an asynchronous DFL system by incorporating neighbor selection and gradient push, termed AsyDFL. Specifically, we require each edge node to push gradients only to a subset of neighbors for resource efficiency. Herein, we first give a theoretical convergence analysis of AsyDFL under the complicated non-IID and heterogeneous scenario, and further design a priority-based algorithm to dynamically select neighbors for each edge node so as to achieve the trade-off between communication cost and model performance. We evaluate the performance of AsyDFL through extensive experiments on a physical platform with 30 NVIDIA Jetson edge devices. 
Evaluation results show that AsyDFL can reduce the communication cost by 57% and the completion time by about 35% for achieving the same test accuracy, and improve model accuracy by at least 6% under the non-IID scenario, compared to the baselines.","PeriodicalId":13443,"journal":{"name":"IEEE/ACM Transactions on Networking","volume":"32 5","pages":"4535-4550"},"PeriodicalIF":3.0,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141722383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Federated PCA on Grassmann Manifold for IoT Anomaly Detection
IF 3.0 | CAS Tier 3, Computer Science | Q2, COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-11 | DOI: 10.1109/TNET.2024.3423780
Tung-Anh Nguyen;Long Tan Le;Tuan Dung Nguyen;Wei Bao;Suranga Seneviratne;Choong Seon Hong;Nguyen H. Tran
With the proliferation of the Internet of Things (IoT) and the rising interconnectedness of devices, network security faces significant challenges, especially from anomalous activities. While traditional machine learning-based intrusion detection systems (ML-IDS) effectively employ supervised learning methods, they possess limitations such as the requirement for labeled data and challenges with high dimensionality. Recent unsupervised ML-IDS approaches such as AutoEncoders and Generative Adversarial Networks (GAN) offer alternative solutions but pose challenges in deployment onto resource-constrained IoT devices and in interpretability. To address these concerns, this paper proposes a novel federated unsupervised anomaly detection framework, FedPCA, that leverages Principal Component Analysis (PCA) and the Alternating Direction Method of Multipliers (ADMM) to learn common representations of distributed non-i.i.d. datasets. Building on the FedPCA framework, we propose two algorithms, FedPE in Euclidean space and FedPG on Grassmann manifolds. Our approach enables real-time threat detection and mitigation at the device level, enhancing network resilience while ensuring privacy. Moreover, the proposed algorithms are accompanied by theoretical convergence rates even under a sub-sampling scheme, a novel result. Experimental results on the UNSW-NB15 and TON-IoT datasets show that our proposed methods offer anomaly detection performance comparable to non-linear baselines, while providing significant improvements in communication and memory efficiency, underscoring their potential for securing IoT networks.
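The core federated-PCA intuition can be shown in a few lines of pure Python: each client summarizes its local (here 2-D) data as a covariance matrix, the server sums those summaries, and power iteration extracts the shared top principal direction without raw samples ever leaving a client. This is only the aggregation core under simplifying assumptions; the paper's ADMM updates and Grassmann-manifold treatment are far richer.

```python
# Minimal federated-PCA sketch: share covariance summaries, not raw data.
def local_cov(points):
    """Covariance matrix of a client's 2-D points (population normalization)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    return [[cxx, cxy], [cxy, cyy]]

def top_direction(cov, iters=100):
    """Power iteration for the leading eigenvector of a 2x2 matrix."""
    v = [1.0, 1.0]
    for _ in range(iters):
        w = [cov[0][0] * v[0] + cov[0][1] * v[1],
             cov[1][0] * v[0] + cov[1][1] * v[1]]
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = [w[0] / norm, w[1] / norm]
    return v

clients = [[(0, 0), (1, 1), (2, 2)], [(0, 1), (2, 3), (4, 5)]]
covs = [local_cov(c) for c in clients]                       # local step
total = [[sum(c[i][j] for c in covs) for j in range(2)] for i in range(2)]
v = top_direction(total)                                     # server step
print(round(abs(v[0]), 2), round(abs(v[1]), 2))              # -> 0.71 0.71
```

Both clients' data lie along the diagonal, so the aggregated principal direction converges to roughly (1, 1)/sqrt(2); anomalies would then be scored by their distance from this shared subspace.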
{"title":"Federated PCA on Grassmann Manifold for IoT Anomaly Detection","authors":"Tung-Anh Nguyen;Long Tan Le;Tuan Dung Nguyen;Wei Bao;Suranga Seneviratne;Choong Seon Hong;Nguyen H. Tran","doi":"10.1109/TNET.2024.3423780","DOIUrl":"10.1109/TNET.2024.3423780","url":null,"abstract":"With the proliferation of the Internet of Things (IoT) and the rising interconnectedness of devices, network security faces significant challenges, especially from anomalous activities. While traditional machine learning-based intrusion detection systems (ML-IDS) effectively employ supervised learning methods, they possess limitations such as the requirement for labeled data and challenges with high dimensionality. Recent unsupervised ML-IDS approaches such as AutoEncoders and Generative Adversarial Networks (GAN) offer alternative solutions but pose challenges in deployment onto resource-constrained IoT devices and in interpretability. To address these concerns, this paper proposes a novel federated unsupervised anomaly detection framework – FedPCA – that leverages Principal Component Analysis (PCA) and the Alternating Directions Method Multipliers (ADMM) to learn common representations of distributed non-i.i.d. datasets. Building on the FedPCA framework, we propose two algorithms, FedPE in Euclidean space and FedPG on Grassmann manifolds. Our approach enables real-time threat detection and mitigation at the device level, enhancing network resilience while ensuring privacy. Moreover, the proposed algorithms are accompanied by theoretical convergence rates even under a sub-sampling scheme, a novel result. 
Experimental results on the UNSW-NB15 and TON-IoT datasets show that our proposed methods offer performance in anomaly detection comparable to non-linear baselines, while providing significant improvements in communication and memory efficiency, underscoring their potential for securing IoT networks.","PeriodicalId":13443,"journal":{"name":"IEEE/ACM Transactions on Networking","volume":"32 5","pages":"4456-4471"},"PeriodicalIF":3.0,"publicationDate":"2024-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141609172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Coalition Formation-Based Sub-Channel Allocation in Full-Duplex-Enabled mmWave IABN With D2D
IF 3.0 | CAS Tier 3, Computer Science | Q2, COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-09 | DOI: 10.1109/TNET.2024.3423775
Zhongyu Ma;Yajing Wang;Zijun Wang;Guangjie Han;Zhanjun Hao;Qun Guo
One of the key techniques for future wireless networks is the full-duplex-enabled millimeter-wave integrated access and backhaul network underlaying device-to-device communication, a 3GPP-inspired comprehensive paradigm for higher spectral efficiency and lower latency. However, multi-user interference (MUI) and residual self-interference (RSI) remain the major bottlenecks before the commercial application of the system. To this end, we investigate the sub-channel allocation problem for this networking paradigm. To maximize the overall achievable rate under consideration of MUI and RSI, the sub-channel allocation problem is first formulated as an integer nonlinear programming problem, for which searching for an optimal solution in polynomial time is intractable. Second, a coalition formation based sub-channel allocation (CFSA) algorithm is proposed, in which the final partition of the sub-channel coalitions is iteratively formed by the concurrent link players according to two defined switching criteria. Third, the properties of the proposed CFSA algorithm are analyzed from the perspectives of Nash stability and uniform convergence. Fourth, the proposed CFSA algorithm is compared with other reference algorithms through extensive simulations, and its advantages in effectiveness, convergence, and near-optimality are demonstrated through key indicators.
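A coalition-formation loop of the kind CFSA describes can be sketched generically: each link player repeatedly switches to the sub-channel (coalition) that maximizes its own utility, and the process stops once no player wants to move, i.e., the partition is Nash-stable. The utility model below, which simply shrinks a link's rate as its channel gets more crowded, is a stand-in assumption; the paper's utilities account for MUI and RSI explicitly.

```python
# Generic switch-based coalition formation until a Nash-stable partition.
def form_coalitions(players, channels, base_rate):
    assign = {p: channels[0] for p in players}      # start everyone on one channel
    def utility(p, ch):
        sharers = sum(1 for q in players if q != p and assign[q] == ch)
        return base_rate[p] / (1 + sharers)         # rate shrinks with crowding
    changed = True
    while changed:                                  # iterate switch operations
        changed = False
        for p in players:
            best = max(channels, key=lambda ch: utility(p, ch))
            if utility(p, best) > utility(p, assign[p]):   # strict improvement
                assign[p] = best
                changed = True
    return assign

rates = {"l1": 4.0, "l2": 3.0, "l3": 2.0}
result = form_coalitions(list(rates), ["ch_a", "ch_b"], rates)
print(result)  # -> {'l1': 'ch_b', 'l2': 'ch_a', 'l3': 'ch_a'}
```

Requiring a strict utility improvement for every switch is what guarantees termination here: each move raises the mover's utility, and once no strict improvement exists the partition is stable by definition.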
{"title":"Coalition Formation-Based Sub-Channel Allocation in Full-Duplex-Enabled mmWave IABN With D2D","authors":"Zhongyu Ma;Yajing Wang;Zijun Wang;Guangjie Han;Zhanjun Hao;Qun Guo","doi":"10.1109/TNET.2024.3423775","DOIUrl":"10.1109/TNET.2024.3423775","url":null,"abstract":"One of the key techniques for future wireless network is full-duplex-enabled millimeter wave integrated access and backhaul network underlaying device-to-device communication, which is a 3GPP-inspired comprehensive paradigm for higher spectral efficiency and lower latency. However, the multi-user interference (MUI) and residual self-interference (RSI) become the major bottleneck before the commercial application of the system. To this end, we investigate the sub-channel allocation problem for this networking paradigm. To maximize the overall achievable rate under the considerations of MUI and RSI, the sub-channel allocation problem is firstly formulated as an integer nonlinear programming problem, which is intractable to search an optimal solution in polynomial time. Secondly, a coalition formation based sub-channel allocation (CFSA) algorithm is proposed, where the final partition of the sub-channel coalition is iteratively formed by the concurrent link players according to the two defined switching criterions. Thirdly, the properties of the proposed CFSA algorithm are analyzed from the perspectives of Nash stability and uniform convergence. 
Fourthly, the proposed CFSA algorithm is compared with other reference algorithms through abundant simulations, and superiorities including effectiveness, convergence and sub-optimality of the proposed CFSA algorithm are demonstrated through the kernel indicators.","PeriodicalId":13443,"journal":{"name":"IEEE/ACM Transactions on Networking","volume":"32 5","pages":"4503-4518"},"PeriodicalIF":3.0,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141574631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Toward Aggregated Payment Channel Networks
IF 3.0 | CAS Tier 3, Computer Science | Q2, COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-09 | DOI: 10.1109/TNET.2024.3423000
Xiaoxue Zhang;Chen Qian
Payment channel networks (PCNs) have been designed and utilized to address the scalability challenge and throughput limitation of blockchains, providing a high-throughput solution for blockchain-based payment systems. However, such "layer-2" blockchain solutions have their own problems: payment channels require a separate deposit for each channel between two users. This locks significant user funds into particular channels without the flexibility of moving them across channels. In this paper, we propose the Aggregated Payment Channel Network (APCN), in which funds are held flexibly on a per-user basis instead of a per-channel basis. To prevent users from misbehaving, e.g., double-spending, APCN includes mechanisms that make use of hardware trusted execution environments (TEEs) to control funds, balances, and payments. The distributed routing protocol in APCN also addresses the congestion problem to further improve resource utilization. Our prototype implementation and simulation results show that APCN achieves significant improvements in transaction success ratio with low routing latency, compared to even the most advanced PCN routing.
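The per-channel versus per-user contrast can be shown with two toy wallets: under per-channel deposits a payment can fail even though the sender has enough money overall, because the funds sit locked in the wrong channel. Class names, balances, and channel names are illustrative, not from the paper.

```python
# Toy contrast between per-channel deposits and APCN-style per-user funds.
class PerChannelWallet:
    def __init__(self, deposits):            # {channel: locked funds}
        self.deposits = dict(deposits)
    def pay(self, channel, amount):
        if self.deposits.get(channel, 0) >= amount:
            self.deposits[channel] -= amount
            return True
        return False                         # fails despite funds elsewhere

class AggregatedWallet:
    def __init__(self, total):
        self.total = total                   # one per-user balance
    def pay(self, channel, amount):
        if self.total >= amount:
            self.total -= amount
            return True
        return False

pcw = PerChannelWallet({"alice-bob": 3, "alice-carol": 5})
agw = AggregatedWallet(total=8)
# Alice holds 8 units either way, but only the aggregated wallet can pay 4
# on the alice-bob channel.
print(pcw.pay("alice-bob", 4), agw.pay("alice-bob", 4))  # -> False True
```

This is exactly the liquidity fragmentation APCN targets; the TEE-based mechanisms described in the abstract are what keep a single per-user balance safe against double-spending across channels.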
{"title":"Toward Aggregated Payment Channel Networks","authors":"Xiaoxue Zhang;Chen Qian","doi":"10.1109/TNET.2024.3423000","DOIUrl":"10.1109/TNET.2024.3423000","url":null,"abstract":"Payment channel networks (PCNs) have been designed and utilized to address the scalability challenge and throughput limitation of blockchains. It provides a high-throughput solution for blockchain-based payment systems. However, such “layer-2” blockchain solutions have their own problems: payment channels require a separate deposit for each channel of two users. Thus it significantly locks funds from users into particular channels without the flexibility of moving these funds across channels. In this paper, we proposed Aggregated Payment Channel Network (APCN), in which flexible funds are used as a per-user basis instead of a per-channel basis. To prevent users from misbehaving such as double-spending, APCN includes mechanisms that make use of hardware trusted execution environments (TEEs) to control funds, balances, and payments. The distributed routing protocol in APCN also addresses the congestion problem to further improve resource utilization. Our prototype implementation and simulation results show that APCN achieves significant improvements on transaction success ratio with low routing latency, compared to even the most advanced PCN routing.","PeriodicalId":13443,"journal":{"name":"IEEE/ACM Transactions on Networking","volume":"32 5","pages":"4333-4348"},"PeriodicalIF":3.0,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141574632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Charting the Complexity Landscape of Compiling Packet Programs to Reconfigurable Switches
IF 3.0 | CAS Tier 3, Computer Science | Q2, COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-09 | DOI: 10.1109/TNET.2024.3424337
Balázs Vass;Erika R. Bérczi-Kovács;Ádám Fraknói;Costin Raiciu;Gábor Rétvári
P4 is a widely used Domain-specific Language for Programmable Data Planes. A critical step in P4 compilation is finding a feasible and efficient mapping of the high-level P4 source code constructs to the physical resources exposed by the underlying hardware, while meeting data and control flow dependencies in the program. In this paper, we take a new look at the algorithmic aspects of this problem, with the motivation to understand the fundamental theoretical limits and obtain better P4 pipeline embeddings, and to speed up practical P4 compilation times for RMT and dRMT target architectures. We report mixed results: we find that P4 compilation is computationally hard even in a severely relaxed formulation, and there is no polynomial-time approximation of arbitrary precision (unless $\mathcal{P} = \mathcal{NP}$), while the good news is that, despite its inherent complexity, P4 compilation is approximable in linear time with a small constant bound even for the most complex, nearly real-life models.
IEEE/ACM Transactions on Networking, vol. 32, no. 5, pp. 4519-4534
Citations: 0
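The pipeline-embedding task this abstract studies can be illustrated with a deliberately simplified greedy sketch: place match-action tables into RMT-style stages so that every table lands in a strictly later stage than the tables it depends on, and no stage exceeds its memory budget. The table names, sizes, and capacity below are hypothetical, and this greedy pass is only a toy stand-in for the approximation algorithms the paper analyzes:

```python
# Minimal sketch of pipeline embedding: greedily place match-action tables
# into consecutive stages, respecting data dependencies (a table must sit in
# a strictly later stage than any prerequisite) and per-stage memory capacity.
# Toy model with hypothetical numbers; real P4 compilers handle far more.

def embed(tables, deps, stage_capacity):
    """tables: {name: memory_size}; deps: {name: set of prerequisite names}.
    Returns {name: stage_index} or None if no table can make progress."""
    stage_of, used = {}, {}
    remaining = dict(tables)
    while remaining:
        placed = False
        for name, size in list(remaining.items()):
            if not deps.get(name, set()) <= stage_of.keys():
                continue  # some prerequisite is not placed yet
            # earliest legal stage: strictly after all prerequisites
            s = 1 + max((stage_of[d] for d in deps.get(name, set())), default=-1)
            while used.get(s, 0) + size > stage_capacity:
                s += 1  # spill to a later stage if this one is full
            stage_of[name] = s
            used[s] = used.get(s, 0) + size
            del remaining[name]
            placed = True
        if not placed:
            return None  # cyclic dependencies: no feasible embedding
    return stage_of

tables = {"ipv4_lpm": 2, "acl": 1, "nat": 2, "egress_qos": 1}
deps = {"acl": {"ipv4_lpm"}, "nat": {"ipv4_lpm"}, "egress_qos": {"acl", "nat"}}
print(embed(tables, deps, stage_capacity=2))
# {'ipv4_lpm': 0, 'acl': 1, 'nat': 2, 'egress_qos': 3}
```

Note how `nat` spills past stage 1 because `acl` already fills it; that interplay of dependency and capacity constraints is what makes the exact problem hard.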
Releasing the Power of In-Network Aggregation With Aggregator-Aware Routing Optimization
IF 3 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2024-07-08 DOI: 10.1109/TNET.2024.3423380
Shouxi Luo;Xiaoyu Yu;Ke Li;Huanlai Xing
By offloading part of the aggregation computation from the logical central parameter servers to network devices like programmable switches, In-Network Aggregation (INA) is a general, effective, and widely used approach to reduce network load, thus alleviating the communication bottlenecks suffered by large-scale distributed training. Given that INA takes effect if and only if the associated traffic goes through the same in-network aggregator, the key to taking advantage of INA lies in routing control. However, existing proposals fall short in doing so and thus are far from optimal, since they select routes for INA-supported traffic without comprehensively considering the characteristics, limitations, and requirements of the network environment, aggregator hardware, and distributed training jobs. To fill the gap, in this paper, we systematically establish a mathematical model to formulate i) the up-down routing constraints of Clos datacenter networks, ii) the limitations raised by modern programmable switches’ pipeline hardware structure, and iii) the various aggregator-aware routing optimization goals required by distributed training tasks under different parallelism strategies. Based on the model, we develop ARO, an Aggregator-aware Routing Optimization solution for INA-accelerated distributed training applications. To be efficient, ARO involves a suite of search-space pruning designs that exploit the model’s characteristics, yielding tens of times improvement in solving time with trivial performance loss. Extensive experiments show that ARO is able to find near-optimal results for large-scale routing optimization in tens of seconds, achieving $1.8\sim 4.0\times$ higher throughput than the state-of-the-art solution.
IEEE/ACM Transactions on Networking, vol. 32, no. 5, pp. 4488-4502
Citations: 0
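The abstract's point that INA helps only when the associated traffic shares an in-network aggregator can be made concrete by counting per-link messages in a toy reduce. The topology, routes, and aggregator placement below are hypothetical; this is not ARO itself:

```python
# Toy illustration of why INA's benefit hinges on routing: worker gradients
# are summed at a switch only if their routes meet there. We count messages
# crossing links for a 2-worker reduce toward a parameter server "ps".
# Topology and routes are hypothetical.

def link_messages(routes, aggregator=None):
    """routes: worker -> path as a list of nodes ending at the server.
    With INA at `aggregator`, the segment past the aggregator carries one
    fused message for all workers whose route traverses it."""
    msgs = 0
    tail_done = False
    for path in routes.values():
        if aggregator in path:
            i = path.index(aggregator)
            msgs += i  # links up to the aggregator: one message per worker
            if not tail_done:
                msgs += len(path) - 1 - i  # fused message beyond it, sent once
                tail_done = True
        else:
            msgs += len(path) - 1  # this route bypasses the aggregator
    return msgs

# Both workers can reach "ps" via switch "t1" -- or via disjoint switches.
shared = {"w1": ["w1", "t1", "core", "ps"], "w2": ["w2", "t1", "core", "ps"]}
disjoint = {"w1": ["w1", "t1", "core", "ps"], "w2": ["w2", "t2", "core", "ps"]}

print(link_messages(shared, aggregator="t1"))    # 4: two uplinks + fused tail
print(link_messages(disjoint, aggregator="t1"))  # 6: w2 bypasses the aggregator
print(link_messages(shared, aggregator=None))    # 6: no aggregation at all
```

Routing w2 through t2 wastes the aggregator entirely; aggregator-aware route selection is exactly the lever ARO optimizes.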
Linear Bandits With Side Observations on Networks
IF 3 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2024-07-08 DOI: 10.1109/TNET.2024.3422323
Avik Kar;Rahul Singh;Fang Liu;Xin Liu;Ness B. Shroff
We investigate linear bandits in a network setting in the presence of side-observations across nodes in order to design recommendation algorithms for users connected via social networks. Users in social networks respond to their friends’ activity and, hence, provide information about each other’s preferences. In our model, when a learning algorithm recommends an article to a user, not only does it observe her response (e.g., an ad click) but also the side-observations, i.e., the response of her neighbors if they were presented with the same article. We model these observation dependencies by a graph $\mathcal{G}$ in which nodes correspond to users and edges to social links. We derive a problem/instance-dependent lower-bound on the regret of any consistent algorithm. We propose an optimization-based data-driven learning algorithm that utilizes the structure of $\mathcal{G}$ in order to make recommendations to users and show that it is asymptotically optimal, in the sense that its regret matches the lower-bound as the number of rounds $T \to \infty$. We show that this asymptotically optimal regret is upper-bounded as $O(|\chi(\mathcal{G})| \log T)$, where $|\chi(\mathcal{G})|$ is the domination number of $\mathcal{G}$. In contrast, a naive application of the existing learning algorithms results in $O(N \log T)$ regret, where N is the number of users.
IEEE/ACM Transactions on Networking, vol. 32, no. 5, pp. 4222-4237
Citations: 0
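A rough intuition for why the regret bound scales with the domination number rather than with N: recommending to a node also reveals its neighbors' responses, so probing a dominating set observes every user. The sketch below greedily builds a dominating set on a hypothetical social graph (greedy yields an approximation, not the exact domination number from the bound):

```python
# Why side-observations help: if recommending to one user also reveals her
# neighbors' responses, probing a dominating set observes every user, so the
# number of "informative" probes can be far below N. Toy graph data below.

def greedy_dominating_set(adj):
    """adj: node -> set of neighbors. Returns a (not necessarily minimum)
    dominating set: every node is in the set or adjacent to a member."""
    uncovered = set(adj)
    dom = set()
    while uncovered:
        # pick the node whose probe would newly observe the most users
        best = max(adj, key=lambda v: len(({v} | adj[v]) & uncovered))
        dom.add(best)
        uncovered -= {best} | adj[best]
    return dom

# Star-like network: one hub connected to everyone, plus an isolated pair.
adj = {
    "hub": {"u1", "u2", "u3", "u4"},
    "u1": {"hub"}, "u2": {"hub"}, "u3": {"hub"}, "u4": {"hub"},
    "v1": {"v2"}, "v2": {"v1"},
}
dom = greedy_dominating_set(adj)
print(len(dom), "probes observe all", len(adj), "users")  # 2 probes vs 7 users
```

On this graph, two probes (the hub plus one node of the isolated pair) cover all seven users, mirroring how the abstract's bound replaces the factor N with a graph-dependent quantity.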
Journal: IEEE/ACM Transactions on Networking