
Latest publications in IEEE/ACM Transactions on Networking

AutoTomo: Learning-Based Traffic Estimator Incorporating Network Tomography
IF 3.0 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-08 | DOI: 10.1109/TNET.2024.3424446
Yan Qiao;Kui Wu;Xinyu Yuan
Estimating the Traffic Matrix (TM) is a critical yet resource-intensive process in network management. With the advent of deep learning models, we now have the potential to learn the inverse mapping from link loads to origin-destination (OD) flows more efficiently and accurately. However, a significant hurdle is that all current learning-based techniques necessitate a training dataset covering a comprehensive TM for a specific duration. This requirement is often infeasible in practical scenarios. This paper addresses this complex learning challenge, specifically when dealing with incomplete and biased TM data. Our initial approach involves parameterizing the unidentified flows, thereby transforming this problem of target-deficient learning into an empirical optimization problem that integrates tomography constraints. Following this, we introduce AutoTomo, a learning-based architecture designed to optimize both the inverse mapping and the unexplored flows during the model’s training phase. We also propose an innovative observation selection algorithm, which aids network operators in gathering the most insightful measurements with limited device resources. We evaluate AutoTomo on three public traffic datasets: Abilene, GÉANT, and Cernet. The results reveal that AutoTomo outperforms five state-of-the-art learning-based TM estimation techniques. With complete training data, AutoTomo enhances the accuracy of the most efficient method by 15%, while it shows an improvement between 30% and 56% with incomplete training data. Furthermore, AutoTomo exhibits rapid testing speed, making it a viable tool for real-time TM estimation.
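The tomography constraint the abstract refers to ties the observed link loads y to the OD flows x through a routing matrix A, i.e. y = Ax, and there are typically far fewer links than OD pairs. A minimal NumPy sketch (toy topology and numbers of my own, not from the paper) of why this inverse mapping is ill-posed:

```python
import numpy as np

# Toy topology: 3 links, 4 OD flows (hypothetical routing matrix A).
# Row i of A marks which OD flows traverse link i, so link loads y = A @ x.
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)
x_true = np.array([2.0, 3.0, 1.0, 4.0])   # ground-truth OD flows (unknown in practice)
y = A @ x_true                             # observed link loads

# With fewer links than flows the system is under-determined: least squares
# returns only the minimum-norm solution consistent with y, not x_true.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.allclose(A @ x_hat, y))           # True: the tomography constraint holds
print(np.allclose(x_hat, x_true))          # False: the flows are not recovered
```

This under-determination is what learning-based estimators such as AutoTomo try to resolve by learning a prior over flows from historical data.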
IEEE/ACM Transactions on Networking, vol. 32, no. 6, pp. 4644-4659.
Cited by: 0
Inter-Temporal Reward Strategies in the Presence of Strategic Ethical Hackers
IF 3.0 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-08 | DOI: 10.1109/TNET.2024.3422922
Jing Hou;Xuyu Wang;Amy Z. Zeng
A skyrocketing increase in cyber-attacks significantly elevates the importance of secure software development. Companies launch various bug-bounty programs to reward ethical hackers for identifying potential vulnerabilities in their systems before malicious hackers can exploit them. One of the most difficult decisions in bug-bounty programs is appropriately rewarding ethical hackers. This paper develops a model of an inter-temporal reward strategy with endogenous e-hacker behaviors. We formulate a novel game model to characterize the interactions between a software vendor and multiple heterogeneous ethical hackers. The optimal levels of rewards are discussed under different reward strategies. The impacts of ethical hackers’ strategic bug-hoarding and their competitive and collaborative behaviors on the performance of the program are also evaluated. We demonstrate the effectiveness of the inter-temporal reward mechanism in attracting ethical hackers and encouraging early bug reports. Our results indicate that ignoring the ethical hackers’ strategic behaviors could result in setting inappropriate rewards, which may inadvertently encourage them to hoard bugs for higher rewards. In addition, a more skilled e-hacker is more likely to delay their reporting and less motivated to work collaboratively with other e-hackers. Moreover, the vendor gains more from e-hacker collaboration when it could significantly increase the speed or probability of uncovering difficult-to-detect vulnerabilities.
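The inter-temporal intuition can be made concrete with a toy calculation (illustrative numbers and functional forms of my own, not the paper's game model): an e-hacker weighs reporting now against hoarding, discounting later rewards by the risk that the bug is found and patched by someone else first.

```python
# Hypothetical numbers: an e-hacker reports at the time t maximizing expected
# payoff. A flat bounty plus an appreciating bug value rewards hoarding; a
# decaying (inter-temporal) schedule pushes the optimal report time to t = 0.
def best_report_time(reward_at, survive_p=0.9, horizon=5):
    # survive_p: per-period chance the bug is still unreported/unpatched
    payoffs = [(survive_p ** t) * reward_at(t) for t in range(horizon)]
    return max(range(horizon), key=lambda t: payoffs[t])

flat = lambda t: 100                       # fixed bounty
growing_flat = lambda t: 100 * (1.2 ** t)  # bug's value grows faster than risk
decaying = lambda t: 160 * (0.7 ** t)      # inter-temporal: early reports pay more

print(best_report_time(flat))          # 0: nothing to gain from waiting
print(best_report_time(growing_flat))  # 4: hoarding pays under a flat scheme
print(best_report_time(decaying))      # 0: decay restores early reporting
```

The sketch only illustrates the incentive direction; the paper's model additionally endogenizes e-hacker skill heterogeneity, competition, and collaboration.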
IEEE/ACM Transactions on Networking, vol. 32, no. 5, pp. 4427-4440.
Cited by: 0
Toward Full-Coverage and Low-Overhead Profiling of Network-Stack Latency
IF 3.0 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-03 | DOI: 10.1109/TNET.2024.3421327
Xiang Chen;Hongyan Liu;Wenbin Zhang;Qun Huang;Dong Zhang;Haifeng Zhou;Xuan Liu;Chunming Wu
In modern data center networks (DCNs), network-stack processing accounts for a large portion of the end-to-end latency of TCP flows, so profiling network-stack latency anomalies is a crucial part of DCN performance diagnosis and troubleshooting. In particular, such profiling requires full coverage (i.e., profiling every TCP packet) and low overhead (i.e., avoiding high CPU consumption on end-hosts). However, existing solutions rely on system calls or tracepoints in end-hosts to implement network-stack latency profiling, leading to either low coverage or high overhead. We propose Torp, a framework that offers full-coverage and low-overhead profiling of network-stack latency. Our key idea is to offload as much of the profiling as possible from costly system calls or tracepoints to a Torp agent built on eBPF modules, and further to include a Torp handler on the ToR switch to accelerate the remaining profiling operations. Torp efficiently coordinates the ToR switch and the Torp agent on end-hosts to jointly execute the entire latency profiling task. We have implemented Torp on 32×100 Gbps Tofino switches. Testbed experiments indicate that Torp achieves full coverage and orders-of-magnitude lower host-side overhead compared to other solutions.
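The core bookkeeping behind such profiling can be sketched independently of Torp's eBPF/switch implementation (this is an illustrative simulation, not the paper's system): per-packet stack latency is the gap between a NIC-receive timestamp and the socket-delivery timestamp, computed for every packet and checked against an anomaly threshold.

```python
# Illustrative only -- Torp itself uses eBPF hooks plus a ToR-switch handler.
# Full coverage here simply means every packet contributes a latency sample.
def stack_latencies(events, threshold_us=50.0):
    # events: (packet_id, nic_rx_timestamp_us, socket_delivery_timestamp_us)
    lat = {pid: sock - nic for pid, nic, sock in events}
    anomalies = [pid for pid, l in lat.items() if l > threshold_us]
    return lat, anomalies

events = [(1, 10.0, 22.0), (2, 15.0, 90.0), (3, 20.0, 31.0)]
lat, anomalies = stack_latencies(events)
print(anomalies)   # [2]: packet 2 spent 75 us in the stack
```

The engineering challenge the paper addresses is collecting these two timestamps for every packet without the per-packet syscall/tracepoint cost this naive version implies.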
IEEE/ACM Transactions on Networking, vol. 32, no. 5, pp. 4441-4455.
Cited by: 0
Minimizing Edge Caching Service Costs Through Regret-Optimal Online Learning
IF 3.0 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-03 | DOI: 10.1109/TNET.2024.3420758
Guocong Quan;Atilla Eryilmaz;Ness B. Shroff
Edge caching has been widely implemented to efficiently serve data requests from end users. Numerous edge caching policies have been proposed to adaptively update the cache contents based on various statistics. One critical statistic is the miss cost, which could measure the latency or the bandwidth/energy consumption to resolve the cache miss. Existing caching policies typically assume that the miss cost for each data item is fixed and known. However, in real systems, they could be random with unknown statistics. A promising approach would be to use online learning to estimate the unknown statistics of these random costs, and make caching decisions adaptively. Unfortunately, conventional learning techniques cannot be directly applied, because the caching problem has additional cache capacity and cache update constraints that are not covered in traditional learning settings. In this work, we resolve these issues by developing a novel edge caching policy that learns uncertain miss costs efficiently, and is shown to be asymptotically optimal. We first derive an asymptotic lower bound on the achievable regret. We then design a Kullback-Leibler lower confidence bound (KL-LCB) based edge caching policy, which adaptively learns the random miss costs by following the “optimism in the face of uncertainty” principle. By employing a novel analysis that accounts for the new constraints and the dynamics of the setting, we prove that the regret of the proposed policy matches the regret lower bound, thus showing asymptotic optimality. Further, via numerical experiments we demonstrate the performance improvements of our policy over natural benchmarks.
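The KL lower-confidence-bound ingredient is standard KL-bandit machinery: for a Bernoulli-like cost with empirical mean p̂ over n samples, the LCB is the smallest mean q ≤ p̂ that a KL-divergence test at exploration level log(t) cannot rule out. A hedged sketch of that computation by bisection (generic index computation, not the paper's full caching policy):

```python
import math

def kl_bernoulli(p, q):
    # Bernoulli KL divergence, clamped away from 0/1 for numerical safety
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_lcb(p_hat, n, t, iters=50):
    """Smallest q <= p_hat not ruled out at level log(t):
    the q solving n * kl(p_hat, q) = log(t), found by bisection."""
    bonus = math.log(max(t, 2)) / n
    lo, hi = 0.0, p_hat
    for _ in range(iters):
        mid = (lo + hi) / 2
        if kl_bernoulli(p_hat, mid) > bonus:
            lo = mid      # mid is too far below p_hat: KL too large
        else:
            hi = mid
    return hi

# More samples tighten the bound toward the empirical mean.
print(kl_lcb(0.5, 10, 100) < kl_lcb(0.5, 1000, 100))   # True
```

Being a *lower* bound on the miss cost implements "optimism in the face of uncertainty" for a cost-minimization problem, mirroring how upper bounds are used for reward maximization.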
IEEE/ACM Transactions on Networking, vol. 32, no. 5, pp. 4349-4364.
Cited by: 0
Circling Reduction Algorithm for Cloud Edge Traffic Allocation Under the 95th Percentile Billing
IF 3.0 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-01 | DOI: 10.1109/TNET.2024.3415649
Pengxiang Zhao;Jintao You;Xiaoming Yuan
In cloud ecosystems, managing bandwidth costs is pivotal for both operational efficiency and service quality. This paper tackles the cloud-edge traffic allocation problem, particularly optimizing for the 95th percentile billing scheme, which is widely employed across various cloud computing scenarios by Internet Service Providers but has yet to be efficiently addressed. We introduce a mathematical model for this issue, confirm its NP-hard complexity, and reformulate it as a mixed-integer programming (MIP). The intricacy of the problem is further magnified by the scale of the cloud ecosystem, involving numerous data centers, client groups, and long billing cycles. Based on a structural analysis of our MIP model, we propose a two-stage solution strategy that retains optimality. We introduce the Circling Reduction Algorithm (CRA), a polynomial-time algorithm based on a rigorously derived lower bound for the objective value, to efficiently determine the binary variables in the first stage, while the remaining linear programming problem in the second stage can be easily resolved. Using the CRA, we develop algorithms for both offline and online traffic allocation scenarios and validate them on real-world datasets from the cloud provider under study. In offline scenarios, our method delivers up to 66.34% cost savings compared to a commercial solver, while also significantly improving computational speed. Additionally, it achieves an average of 14% cost reduction over the current solution of the studied cloud provider. For online scenarios, we achieve an average cost-saving of 8.64% while staying within a 9% gap of the theoretical optimum.
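For readers unfamiliar with the billing scheme: in its common form, bandwidth is sampled every 5 minutes, the top 5% of samples in the billing cycle are discarded, and the customer is billed at the rate of the highest remaining sample. A short sketch (the convention above, not the paper's optimization model):

```python
# 95th-percentile billing: sort the cycle's 5-minute bandwidth samples,
# discard the top 5%, and bill on the highest surviving sample.
def percentile95_bill(samples_mbps, rate_per_mbps=1.0):
    ranked = sorted(samples_mbps)
    k = int(len(ranked) * 0.95) - 1          # index of the 95th-percentile sample
    return ranked[max(k, 0)] * rate_per_mbps

# 20 samples: the single 900-Mbps burst falls inside the discarded top 5%.
samples = [100] * 19 + [900]
print(percentile95_bill(samples))   # 100.0: short bursts are effectively free
```

This "free burst" property is exactly what makes traffic allocation under the scheme combinatorial: an allocator can concentrate traffic into each link's already-discarded peak slots, which is the structure the paper's MIP formulation exploits.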
IEEE/ACM Transactions on Networking, vol. 32, no. 5, pp. 4254-4269.
Cited by: 0
Toward Improved Energy Fairness in CSMA-Based LoRaWAN
IF 3.0 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-06-28 | DOI: 10.1109/TNET.2024.3418913
Chenglong Shao;Osamu Muta;Kazuya Tsukamoto;Wonjun Lee;Xianpeng Wang;Malvin Nkomo;Kapil R. Dandekar
This paper proposes a heterogeneous carrier-sense multiple access (CSMA) protocol named LoHEC, as the first research attempt to improve energy fairness when applying CSMA to long-range wide area networks (LoRaWAN). LoHEC is enabled by Channel Activity Detection (CAD), a recently introduced carrier-sensing technique that detects LoRaWAN signals even below the noise floor. The design of LoHEC is motivated by the fact that existing CAD-based CSMA proposals are homogeneous: they require LoRaWAN end devices to perform identical CAD regardless of differences in their spreading factor (SF), a key network parameter. This causes energy-consumption imbalance among end devices, since the energy consumed during CAD is significantly affected by the SF. By accounting for the SF heterogeneity of LoRaWAN, LoHEC requires end devices to perform different numbers of CAD operations with different CAD intervals during channel access. In particular, the number of CADs and the CAD interval are determined by the CAD energy consumption under different SFs. We conduct extensive experiments on a practical LoRaWAN testbed with 60 commercial off-the-shelf end devices. Experimental results show that, compared with existing solutions, LoHEC achieves up to a 0.85× improvement in energy fairness on average.
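The SF dependence comes from LoRa symbol timing: a symbol lasts 2^SF / BW seconds, so CAD time (and hence energy) grows roughly exponentially in SF. A back-of-the-envelope sketch (my assumed constants, not the paper's measured model: 125 kHz bandwidth, ~2 symbols per CAD, an assumed radio draw) of how equalizing CAD energy implies fewer CADs for high-SF devices:

```python
BW_HZ = 125_000          # typical LoRaWAN channel bandwidth (assumed)
CAD_SYMBOLS = 2          # assumed symbols consumed per CAD operation
RX_CURRENT_MA = 10.0     # assumed radio current draw during CAD

def cad_energy_mj(sf, v=3.3):
    # symbol time = 2**sf / BW; energy = current * voltage * time
    t_cad = CAD_SYMBOLS * (2 ** sf) / BW_HZ   # seconds
    return RX_CURRENT_MA * v * t_cad           # mA * V * s = mJ

def fair_cad_budget(sf, budget_mj=5.0):
    # CAD operations each SF can afford under a common energy budget
    return int(budget_mj / cad_energy_mj(sf))

for sf in (7, 9, 12):
    print(sf, round(cad_energy_mj(sf), 3), fair_cad_budget(sf))
```

Under these assumptions an SF7 device can afford dozens of CADs for every couple an SF12 device performs, which is the imbalance LoHEC's heterogeneous CAD counts and intervals are designed to correct.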
IEEE/ACM Transactions on Networking, vol. 32, no. 5, pp. 4382-4397.
Cited by: 0
Personalized Pricing Through Strategic User Profiling in Social Networks
IF 3.0 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-06-26 | DOI: 10.1109/TNET.2024.3410976
Qinqi Lin;Lingjie Duan;Jianwei Huang
Traditional user profiling techniques rely on browsing history or purchase records to identify users’ willingness to pay. This enables sellers to offer personalized prices to profiled users while charging only a uniform price to non-profiled users. However, the emergence of privacy-enhancing technologies has caused users to actively avoid on-site data tracking. Today, major online sellers have turned to public platforms such as online social networks to better track users’ profiles from their product-related discussions. This paper presents the first analytical study on how users should best manage their social activities against potential personalized pricing, and how a seller should strategically adjust her pricing scheme to facilitate user profiling in social networks. We formulate a dynamic Bayesian game played between the seller and users under asymmetric information. The key challenge of analyzing this game comes from the double couplings between the seller and the users as well as among the users. Furthermore, the equilibrium analysis needs to ensure consistency between users’ revealed information and the seller’s belief under random user profiling. We address these challenges by alternately applying backward and forward induction, and successfully characterize the unique perfect Bayesian equilibrium (PBE) in closed form. Our analysis reveals that as the accuracy of profiling technology improves, the seller tends to raise the equilibrium uniform price to motivate users’ increased social activities and facilitate user profiling. However, this results in most users being worse off after the informed consent policy is imposed to ensure users’ awareness of data access and profiling practices by potential sellers. This finding suggests that recent regulatory evolution towards enhancing users’ privacy awareness may have unintended consequences of reducing users’ payoffs. Finally, we examine prevalent pricing practices where the seller breaks a pricing promise to personalize final offerings, and show that it only slightly improves the seller’s average revenue while introducing higher variance.
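The seller's basic incentive can be illustrated with a toy revenue comparison (my numbers and simplifications, not the paper's equilibrium model): profiled users pay their full valuation, while everyone else faces one revenue-maximizing uniform price.

```python
# Toy illustration of uniform vs. personalized pricing revenue.
def uniform_revenue(valuations, price):
    # each user buys iff their valuation covers the posted price
    return sum(price for v in valuations if v >= price)

def best_uniform(valuations):
    # revenue-maximizing uniform price (search over posted valuations)
    return max(set(valuations), key=lambda p: uniform_revenue(valuations, p))

def mixed_revenue(valuations, profiled_frac):
    # profiled users pay their valuation; the rest face the uniform price
    p = best_uniform(valuations)
    personalized = profiled_frac * sum(valuations)
    uniform = (1 - profiled_frac) * uniform_revenue(valuations, p)
    return personalized + uniform

vals = [2, 4, 6, 8, 10]
print(mixed_revenue(vals, 0.0))   # 18.0: uniform pricing only
print(mixed_revenue(vals, 1.0))   # 30.0: perfect profiling extracts all surplus
```

The gap between those two numbers is what drives the seller, in the paper's model, to tune the uniform price so that users reveal more through social activity.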
IEEE/ACM Transactions on Networking, vol. 32, no. 5, pp. 3977-3992.
Citations: 0
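The backward-induction logic described in the abstract above can be made concrete with a toy two-stage version of the game. Everything here is an illustrative assumption, not the paper's actual model: users with valuation v choose whether to be socially active, anticipating that with profiling accuracy alpha an active user is later charged a personalized price equal to v; the seller anticipates these best responses and then picks the revenue-maximizing uniform price p.

```python
# Toy backward-induction sketch (hypothetical parameterization).
# Stage 2: each user best-responds to the posted uniform price p.
# Stage 1: the seller, anticipating stage 2, maximizes revenue over p.

def user_best_response(v, p, alpha, benefit):
    """Return 1 if a user with valuation v chooses to be socially active."""
    surplus_inactive = max(v - p, 0.0)
    # Active: gains a social benefit, but with prob. alpha is profiled and
    # charged v, so the purchase surplus survives only with prob. 1 - alpha.
    surplus_active = benefit + (1.0 - alpha) * max(v - p, 0.0)
    return 1 if surplus_active >= surplus_inactive else 0

def seller_revenue(p, valuations, alpha, benefit):
    rev = 0.0
    for v in valuations:
        if user_best_response(v, p, alpha, benefit):
            # Profiled with prob. alpha -> pays v; otherwise pays p if v >= p.
            rev += alpha * v + (1.0 - alpha) * (p if v >= p else 0.0)
        else:
            rev += p if v >= p else 0.0
    return rev

def optimal_uniform_price(valuations, alpha, benefit):
    # Backward induction: the seller optimizes against the users'
    # anticipated best responses computed above.
    return max(valuations,
               key=lambda p: seller_revenue(p, valuations, alpha, benefit))
```

Sweeping `alpha` over a grid of candidate prices lets one trace how the revenue-maximizing uniform price reacts to profiling accuracy in this toy setting; the paper's closed-form PBE analysis covers the general case.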
Whittle Index-Based Q-Learning for Wireless Edge Caching With Linear Function Approximation 基于惠特尔索引的 Q-学习,用于线性函数逼近的无线边缘缓存
IF 3 CAS Zone 3, Computer Science Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2024-06-24 DOI: 10.1109/TNET.2024.3417351
Guojun Xiong;Shufan Wang;Jian Li;Rahul Singh
We consider the problem of content caching at the wireless edge to serve a set of end users via unreliable wireless channels so as to minimize the average latency experienced by end users due to the constrained wireless edge cache capacity. We formulate this problem as a Markov decision process, or more specifically a restless multi-armed bandit problem, which is provably hard to solve. We begin by investigating a discounted counterpart, and prove that it admits an optimal policy of the threshold type. We then show that this result also holds for the average latency problem. Using this structural result, we establish the indexability of our problem, and employ the Whittle index policy to minimize average latency. Since system parameters such as content request rates and wireless channel conditions are often unknown and time-varying, we further develop a model-free reinforcement learning algorithm dubbed Q+-Whittle that relies on the Whittle index policy. However, Q+-Whittle requires storing the Q-function values for all state-action pairs, the number of which can be extremely large for wireless edge caching. To this end, we approximate the Q-function by a parameterized function class with a much smaller dimension, and further design a Q+-Whittle algorithm with linear function approximation, called Q+-Whittle-LFA. We provide a finite-time bound on the mean-square error of Q+-Whittle-LFA. Simulation results using real traces demonstrate that Q+-Whittle-LFA yields excellent empirical performance.
{"title":"Whittle Index-Based Q-Learning for Wireless Edge Caching With Linear Function Approximation","authors":"Guojun Xiong;Shufan Wang;Jian Li;Rahul Singh","doi":"10.1109/TNET.2024.3417351","DOIUrl":"10.1109/TNET.2024.3417351","url":null,"abstract":"We consider the problem of content caching at the wireless edge to serve a set of end users via unreliable wireless channels so as to minimize the average latency experienced by end users due to the constrained wireless edge cache capacity. We formulate this problem as a Markov decision process, or more specifically a restless multi-armed bandit problem, which is provably hard to solve. We begin by investigating a discounted counterpart, and prove that it admits an optimal policy of the threshold-type. We then show that this result also holds for average latency problem. Using this structural result, we establish the indexability of our problem, and employ the Whittle index policy to minimize average latency. Since system parameters such as content request rates and wireless channel conditions are often unknown and time-varying, we further develop a model-free reinforcement learning algorithm dubbed as \u0000<monospace>Q+-Whittle</monospace>\u0000 that relies on Whittle index policy. However, \u0000<monospace>Q+-Whittle</monospace>\u0000 requires to store the Q-function values for all state-action pairs, the number of which can be extremely large for wireless edge caching. To this end, we approximate the Q-function by a parameterized function class with a much smaller dimension, and further design a \u0000<monospace>Q+-Whittle</monospace>\u0000 algorithm with linear function approximation, which is called \u0000<monospace>Q+-Whittle-LFA</monospace>\u0000. We provide a finite-time bound on the mean-square error of \u0000<monospace>Q+-Whittle-LFA</monospace>\u0000. 
Simulation results using real traces demonstrate that \u0000<monospace>Q+-Whittle-LFA</monospace>\u0000 yields excellent empirical performance.","PeriodicalId":13443,"journal":{"name":"IEEE/ACM Transactions on Networking","volume":"32 5","pages":"4286-4301"},"PeriodicalIF":3.0,"publicationDate":"2024-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141529295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
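The two building blocks named in the abstract above — the Whittle index policy and Q-learning with linear function approximation — can be sketched as follows. The feature encoding, step size, and discount factor here are illustrative assumptions; this is not the paper's Q+-Whittle-LFA algorithm, only the generic form of each ingredient: activate the m arms with the largest indices, and update a linearly parameterized Q-function, Q(s, a) = phi(s, a) . w, by a TD(0) step.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def whittle_top_m(indices, m):
    """Return the indices of the m arms with the largest Whittle indices."""
    order = sorted(range(len(indices)), key=lambda i: indices[i], reverse=True)
    return set(order[:m])

def lfa_q_update(w, phi_sa, reward, phi_next_all, gamma=0.9, lr=0.1):
    """One Q-learning step with linear function approximation.

    phi_sa: feature vector of the (state, action) pair just taken;
    phi_next_all: feature vectors of every available action in the next state.
    """
    q_next = max(dot(phi, w) for phi in phi_next_all)
    td_error = reward + gamma * q_next - dot(phi_sa, w)
    # Semi-gradient TD(0): move w along phi_sa scaled by the TD error.
    return [wi + lr * td_error * fi for wi, fi in zip(w, phi_sa)]
```

In a caching loop one would estimate a per-content Whittle index from the learned weights each slot, then call `whittle_top_m` with m equal to the cache capacity to pick which contents to keep.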
PARING: Joint Task Placement and Routing for Distributed Training With In-Network Aggregation PARING:利用网络内聚合进行分布式训练的联合任务分配和路由选择
IF 3 CAS Zone 3, Computer Science Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2024-06-20 DOI: 10.1109/TNET.2024.3414853
Yuhang Qiu;Gongming Zhao;Hongli Xu;He Huang;Chunming Qiao
With the increase in both the model size and dataset size of distributed training (DT) tasks, communication between the workers and parameter servers (PSs) in a cluster has become a bottleneck. In-network aggregation (INA) enabled by programmable switches has been proposed as a promising solution to alleviate the communication bottleneck. However, existing works focused on in-network aggregation implementation based on simple DT placement and fixed routing policies, which may lead to a large communication overhead and inefficient use of resources (e.g., storage, computing power and bandwidth). In this paper, we propose PARING, the first-of-its-kind INA approach that jointly optimizes DT task placement and routing in order to reduce traffic volume and minimize communication time. We formulate the problem as a nonlinear multi-objective mixed-integer programming problem, and prove its NP-Hardness. Based on the concept of Steiner trees, an algorithm with bounded approximation factors is proposed for this problem. Large-scale simulations show that our algorithm can reduce communication time by up to 81.0% and traffic volume by up to 19.1% compared to the state-of-the-art algorithms.
{"title":"PARING: Joint Task Placement and Routing for Distributed Training With In-Network Aggregation","authors":"Yuhang Qiu;Gongming Zhao;Hongli Xu;He Huang;Chunming Qiao","doi":"10.1109/TNET.2024.3414853","DOIUrl":"10.1109/TNET.2024.3414853","url":null,"abstract":"With the increase in both the model size and dataset size of distributed training (DT) tasks, communication between the workers and parameter servers (PSs) in a cluster has become a bottleneck. In-network aggregation (INA) enabled by programmable switches has been proposed as a promising solution to alleviate the communication bottleneck. However, existing works focused on in-network aggregation implementation based on simple DT placement and fixed routing policies, which may lead to a large communication overhead and inefficient use of resources (e.g., storage, computing power and bandwidth). In this paper, we propose PARING, the first-of-its-kind INA approach that jointly optimizes DT task placement and routing in order to reduce traffic volume and minimize communication time. We formulate the problem as a nonlinear multi-objective mixed-integer programming problem, and prove its NP-Hardness. Based on the concept of Steiner trees, an algorithm with bounded approximation factors is proposed for this problem. 
Large-scale simulations show that our algorithm can reduce communication time by up to 81.0% and traffic volume by up to 19.1% compared to the state-of-the-art algorithms.","PeriodicalId":13443,"journal":{"name":"IEEE/ACM Transactions on Networking","volume":"32 5","pages":"4317-4332"},"PeriodicalIF":3.0,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141523081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
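The abstract above says the proposed algorithm is built on Steiner trees. The sketch below is not PARING itself but the classical metric-closure Steiner-tree approximation (Kou-Markowsky-Berman style, without the final pruning passes), included only to make the underlying idea concrete; the adjacency-dict graph format `adj[u] = [(v, weight), ...]` is an assumption for illustration.

```python
import heapq
from itertools import combinations

def dijkstra(adj, src):
    """Shortest-path distances and predecessors from src."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def steiner_tree_approx(adj, terminals):
    """Edge set of an approximate Steiner tree spanning `terminals`."""
    sp = {t: dijkstra(adj, t) for t in terminals}
    # Kruskal's MST on the metric closure of the terminal set.
    pairs = sorted((sp[u][0][v], u, v) for u, v in combinations(terminals, 2))
    parent = {t: t for t in terminals}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    edges = set()
    for _, u, v in pairs:
        if find(u) == find(v):
            continue
        parent[find(u)] = find(v)
        # Expand the metric-closure edge back into a real shortest path.
        node, prev = v, sp[u][1]
        while node != u:
            edges.add(frozenset((prev[node], node)))
            node = prev[node]
    return edges
```

For distributed training with in-network aggregation, the terminals would be the workers and the parameter server, and the returned edge set is the aggregation tree routed through the substrate switches.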
Toward Resource-Efficient and High- Performance Program Deployment in Programmable Networks 在可编程网络中实现资源高效和高性能的程序部署
IF 3 CAS Zone 3, Computer Science Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2024-06-20 DOI: 10.1109/TNET.2024.3413388
Hongyan Liu;Xiang Chen;Qun Huang;Guoqiang Sun;Peiqiao Wang;Dong Zhang;Chunming Wu;Xuan Liu;Qiang Yang
Programmable switches allow administrators to customize packet processing behaviors in data plane programs. However, existing solutions for program deployment fail to achieve resource efficiency and high packet processing performance. In this paper, we propose SPEED, a system that provides resource-efficient and high-performance deployment for data plane programs. For resource efficiency, SPEED merges input data plane programs by reducing program redundancy. It then abstracts the substrate network into a "one big switch" (OBS) and deploys the merged program on the OBS while minimizing resource usage. For high performance, SPEED searches for the performance-optimal mapping between the OBS and the substrate network with respect to network-wide constraints. It also maintains program logic among different switches via inter-device packet scheduling. We have implemented SPEED on a Barefoot Tofino switch. The evaluation indicates that SPEED achieves resource-efficient and high-performance deployment for real data plane programs.
{"title":"Toward Resource-Efficient and High- Performance Program Deployment in Programmable Networks","authors":"Hongyan Liu;Xiang Chen;Qun Huang;Guoqiang Sun;Peiqiao Wang;Dong Zhang;Chunming Wu;Xuan Liu;Qiang Yang","doi":"10.1109/TNET.2024.3413388","DOIUrl":"10.1109/TNET.2024.3413388","url":null,"abstract":"Programmable switches allow administrators to customize packet processing behaviors in data plane programs. However, existing solutions for program deployment fail to achieve resource efficiency and high packet processing performance. In this paper, we propose SPEED, a system that provides resource-efficient and high-performance deployment for data plane programs. For resource efficiency, SPEED merges input data plane programs by reducing program redundancy. Then it abstracts the substrate network into an one big switch (OBS), and deploys the merged program on the OBS while minimizing resource usage. For high performance, SPEED searches for the performance-optimal mapping between the OBS and the substrate network with respect to network-wide constraints. It also maintains program logic among different switches via inter-device packet scheduling. We have implemented SPEED on a Barefoot Tofino switch. The evaluation indicates that SPEED achieves resource-efficient and high-performance deployment for real data plane programs.","PeriodicalId":13443,"journal":{"name":"IEEE/ACM Transactions on Networking","volume":"32 5","pages":"4270-4285"},"PeriodicalIF":3.0,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141523079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
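A deliberately small model of the first step the abstract above describes, merging data plane programs by reducing redundancy. The `(table_name, match_fields, action)` representation and the redundancy criterion (same match fields and same action) are invented for illustration and are not SPEED's actual program model.

```python
# Each program is an ordered list of (table_name, match_fields, action)
# tuples. Merging keeps one copy of tables that are redundant across
# programs while preserving each program's relative table order.

def merge_programs(programs):
    merged, seen = [], set()
    for prog in programs:
        for name, match_fields, action in prog:
            key = (match_fields, action)  # dedup on fields + action, not name
            if key not in seen:
                seen.add(key)
                merged.append((name, match_fields, action))
    return merged
```

For example, an ACL program and a firewall program that both match on `(ipv4.src, ipv4.dst)` with a `drop` action would share that table after merging, so the merged program consumes one fewer match-action stage on the switch.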
Journal: IEEE/ACM Transactions on Networking