
Latest publications: 2022 IEEE 42nd International Conference on Distributed Computing Systems (ICDCS)

Computational Offloading for Non-Time-Critical Applications
Pub Date : 2022-07-01 DOI: 10.1109/ICDCS54860.2022.00124
Richard Patsch
The increasing demand for computational resources keeps outpacing the capabilities of available User Equipment (UE). To overcome the intrinsic hardware limitations of UEs, computational offloading was proposed. The combination of UE and the seemingly endless computational capacity of the cloud aims to cope with those limitations. Numerous frameworks leverage Edge Computing (EC), but a significant drawback of EC is the required infrastructure. Some use cases, however, do not benefit from lower response times and can remain in the cloud, where more potent resources are at one's disposal. The main contributions are to determine computational demands, allocate serverless resources, partition code, and integrate computational offloading into a modern software deployment process. By focusing on non-time-critical use cases, the drawbacks of EC can be neglected to create a more developer-friendly approach. The originality lies in the allocation of serverless resources for such endeavours, the appropriate deployment of partitions, and the integration into CI/CD pipelines. The methodology used is Design Science Research; many iterations and proof-of-concept implementations yield knowledge and artefacts.
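The offload-or-not decision for such non-time-critical work can be pictured with a minimal sketch: a task runs locally when its estimated cost fits a budget, otherwise it is shipped to a serverless function, and it falls back to slow local execution on failure since latency is not critical. The endpoint URL, cost model, and budget below are illustrative assumptions, not details from the paper.

```python
import json
import urllib.request

# Hypothetical serverless endpoint; the paper's actual deployment pipeline is not specified here.
OFFLOAD_URL = "https://example.com/functions/heavy-task"

def estimate_local_cost(n_items: int, per_item_ms: float = 5.0) -> float:
    """Rough estimate of local execution time in milliseconds (assumed cost model)."""
    return n_items * per_item_ms

def run_locally(items):
    # Placeholder for the computation-heavy partition that would normally be offloaded.
    return [x * x for x in items]

def offload(items, timeout_s: float = 30.0):
    """Send the partitioned workload to a serverless function and wait for the result."""
    payload = json.dumps({"items": items}).encode("utf-8")
    req = urllib.request.Request(OFFLOAD_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=timeout_s) as resp:
        return json.loads(resp.read())["result"]

def execute(items, local_budget_ms: float = 200.0):
    """Offload only when the estimated local cost exceeds the budget; latency is not critical."""
    if estimate_local_cost(len(items)) <= local_budget_ms:
        return run_locally(items)
    try:
        return offload(items)
    except Exception:
        # Non-time-critical use case: falling back to local execution is acceptable.
        return run_locally(items)

if __name__ == "__main__":
    print(execute(list(range(10))))  # small input, stays local
```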
Citations: 0
MoNet: A Fast Payment Channel Network for Scriptless Cryptocurrency Monero
Pub Date : 2022-07-01 DOI: 10.1109/ICDCS54860.2022.00035
Zhimei Sui, Joseph K. Liu, Jiangshan Yu, Xianrui Qin
We propose MoNet, the first bi-directional payment channel network with unlimited lifetime for Monero. It is fully compatible with Monero without requiring any modification of the current Monero blockchain. MoNet preserves transaction fungibility, i.e., transactions over MoNet and Monero are indistinguishable, and guarantees anonymity of Monero and MoNet users by avoiding any potential privacy leakage introduced by the new payment channel network. We also propose a new cryptographic primitive, named Verifiable Consecutive One-way Function (VCOF). It allows one to generate a sequence of statement-witness pairs in a consecutive and verifiable way, and these statement-witness pairs are one-way: it is easy to compute a statement-witness pair from any of the previously generated pairs, but hard to do so in the opposite direction. By using VCOF, a signer can produce a series of consecutive adaptor signatures (CAS). We further propose a generic construction of consecutive adaptor signatures as an important building block of MoNet. We develop a proof-of-concept implementation of MoNet, and our evaluation shows that MoNet can reach the same transaction throughput as Lightning Network, the payment channel network for Bitcoin. Moreover, we provide a security analysis of MoNet under the Universal Composable (UC) security framework.
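The "consecutive one-way" idea behind VCOF can be pictured with a hash-chain toy example: anyone holding an earlier witness can derive every later statement-witness pair, while going backwards would require inverting the hash. The sketch below only illustrates that property under simplistic, made-up parameters; it is not the paper's actual VCOF construction or its adaptor-signature scheme.

```python
import hashlib

# Toy parameters for illustration only; a real construction would use a proper
# elliptic-curve group and the paper's actual VCOF definition.
P = 2**255 - 19          # a large prime modulus
G = 5                    # base for the toy discrete-log-style commitment

def next_witness(w: int) -> int:
    """One-way step: easy to go forward, hard to invert."""
    return int.from_bytes(hashlib.sha256(w.to_bytes(32, "big")).digest(), "big") % P

def statement(w: int) -> int:
    """Public statement committing to witness w."""
    return pow(G, w, P)

def chain(seed: int, length: int):
    """Generate consecutive statement-witness pairs from a seed witness."""
    pairs, w = [], seed
    for _ in range(length):
        pairs.append((statement(w), w))
        w = next_witness(w)
    return pairs

pairs = chain(seed=123456789, length=4)
# Holding the witness of pair 1 is enough to recompute pair 2 (and every later pair),
# but recovering pair 0's witness from pair 1 would require inverting SHA-256.
assert pairs[2][0] == statement(next_witness(pairs[1][1]))
```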
Citations: 2
Awake-Efficient Distributed Algorithms for Maximal Independent Set
Pub Date : 2022-07-01 DOI: 10.1109/ICDCS54860.2022.00153
Khalid Hourani, Gopal Pandurangan, Peter Robinson
We present a simple algorithmic framework for designing efficient distributed algorithms for the fundamental symmetry breaking problem of Maximal Independent Set (MIS) in the sleeping model [Chatterjee et al., PODC 2020]. In the sleeping model, only the rounds in which a node is awake are counted for the awake complexity, while sleeping rounds are ignored. This is motivated by the fact that a node spends resources only in its awake rounds and hence the goal is to minimize the awake complexity. Our framework allows us to design distributed MIS algorithms that have $\mathcal{O}(\text{polyloglog } n)$ (worst-case) awake complexity in certain important graph classes which satisfy the so-called adjacency property. Informally, the adjacency property guarantees that the graph can be partitioned into an appropriate number of classes so that each node has at least one neighbor belonging to every class. Graphs that can satisfy the adjacency property are random graphs with large clustering coefficient such as random geometric graphs as well as line graphs of regular (or near regular) graphs. We first apply our framework to design two randomized distributed MIS algorithms for random geometric graphs of arbitrary dimension d (even non-constant). The first algorithm has $\mathcal{O}(\text{polyloglog } n)$ (worst-case) awake complexity with high probability, where n is the number of nodes in the graph. This means that any node in the network spends only $\mathcal{O}(\text{polyloglog } n)$ awake rounds; this is almost exponentially better than the (traditional) time complexity of $\mathcal{O}(\log n)$ rounds (where there is no distinction between awake and sleeping rounds) known for distributed MIS algorithms on general graphs, or even the faster $\mathcal{O}\left(\sqrt{\frac{\log n}{\log\log n}}\right)$ rounds known for Erdos-Renyi random graphs. However, the (traditional) time complexity of our first algorithm is quite large, essentially proportional to the degree of the graph. Our second algorithm has a slightly worse awake complexity of $\mathcal{O}(d\,\text{polyloglog } n)$, but achieves a significantly better time complexity of $\mathcal{O}(d \log n\,\text{polyloglog } n)$ rounds whp. We also show that our framework can be used to design $\mathcal{O}(\text{polyloglog } n)$ awake complexity MIS algorithms in other types of random graphs, namely an augmented Erdos-Renyi random graph that has a large clustering coefficient.
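For context, the classical symmetry-breaking step that distributed MIS algorithms build on is a Luby-style randomized round: every active node draws a random priority and joins the MIS if it beats all of its still-active neighbors, after which winners and their neighbors drop out. The sketch below shows that baseline in a centralized simulation; it is not the paper's awake-efficient sleeping-model algorithm, which additionally schedules awake and sleeping rounds.

```python
import random

def luby_mis(adj):
    """Classic Luby-style randomized MIS on an undirected graph given as an
    adjacency dict {node: set(neighbors)}. Baseline only; the paper's
    sleeping-model algorithms additionally decide when nodes stay asleep."""
    active = set(adj)
    mis = set()
    while active:
        # Each active node draws a random priority (one synchronous round).
        r = {v: random.random() for v in active}
        winners = {v for v in active
                   if all(r[v] > r[u] for u in adj[v] if u in active)}
        mis |= winners
        # Winners and their neighbors leave the computation.
        removed = set(winners)
        for v in winners:
            removed |= adj[v] & active
        active -= removed
    return mis

graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(luby_mis(graph))  # e.g. {1, 4}, {2, 4}, or {3}
```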
Citations: 4
Curb: Trusted and Scalable Software-Defined Network Control Plane for Edge Computing
Pub Date : 2022-07-01 DOI: 10.1109/ICDCS54860.2022.00054
Minghui Xu, Chenxu Wang, Yifei Zou, Dongxiao Yu, Xiuzhen Cheng, Weifeng Lyu
The proliferation of edge computing brings new challenges due to the complexity of decentralized edge networks. Software-defined networking (SDN) offers programmability and flexibility in handling complicated networks. However, designing an SDN control plane that is both trusted and scalable, the core component of the SDN architecture for edge computing, remains an open problem. In this paper, we propose Curb, a novel group-based SDN control plane that seamlessly integrates blockchain and BFT consensus to ensure Byzantine fault tolerance, verifiability, traceability, and scalability within one framework. Curb supports trusted flow rule updates and adaptive controller reassignment. Importantly, we leverage a group-based control plane to realize a scalable network in which the message complexity of each round is upper bounded by O(N), where N is the number of controllers, to reduce the overheads caused by blockchain consensus. Finally, we conduct extensive simulations on the classical Internet2 network to validate our design.
Citations: 2
A Novel Distributed Task Scheduling Framework for Supporting Vehicular Edge Intelligence
Pub Date : 2022-07-01 DOI: 10.1109/ICDCS54860.2022.00098
Kun Yang, Peng Sun, Jieyu Lin, A. Boukerche, Liang Song
In recent years, data-driven intelligent transportation systems (ITS) have developed rapidly and brought various AI-assisted applications that improve traffic efficiency. However, these applications are constrained by their inherent high computing demand and the limits of vehicular computing power. Vehicular edge computing (VEC) has shown great potential to support these applications by providing computing and storage capacity in close proximity. Given the heterogeneous nature of in-vehicle applications and the highly dynamic network topology of the Internet-of-Vehicles (IoV) environment, achieving efficient scheduling of computational tasks is a critical problem. Accordingly, we design a two-layer distributed online task scheduling framework to maximize the task acceptance ratio (TAR) under various QoS requirements when facing unbalanced task distribution. Briefly, we implement computation offloading and transmission scheduling policies on the vehicles to optimize onboard computational task scheduling. Meanwhile, in the edge computing layer, a new distributed task dispatching policy is developed to maximize the utilization of system computing power and minimize the data transmission delay caused by vehicle motion. Through single-vehicle and multi-vehicle simulations, we evaluate the performance of our framework, and the experimental results show that our method outperforms state-of-the-art algorithms. Moreover, we conduct ablation experiments to validate the effectiveness of our core algorithms.
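The task acceptance ratio (TAR) objective can be made concrete with a toy greedy dispatcher: a task is accepted only if the earliest-available edge server can finish it before its deadline, and rejected otherwise. This is an illustrative baseline with made-up numbers, not the paper's two-layer scheduling policy.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    task_id: int
    duration_s: float   # processing time on an edge server
    deadline_s: float   # absolute deadline (seconds from now)

@dataclass
class EdgeServer:
    name: str
    busy_until_s: float = 0.0

def dispatch(tasks: List[Task], servers: List[EdgeServer]) -> float:
    """Greedy earliest-deadline-first dispatch onto the earliest-available server;
    returns the task acceptance ratio (TAR)."""
    accepted = 0
    for task in sorted(tasks, key=lambda t: t.deadline_s):
        server = min(servers, key=lambda s: s.busy_until_s)
        finish = server.busy_until_s + task.duration_s
        if finish <= task.deadline_s:
            server.busy_until_s = finish
            accepted += 1
        # otherwise the task is rejected outright (not queued)
    return accepted / len(tasks)

tasks = [Task(i, duration_s=1.0, deadline_s=d)
         for i, d in enumerate([1.0, 1.0, 1.0, 3.0])]
servers = [EdgeServer("edge-1"), EdgeServer("edge-2")]
print(f"TAR = {dispatch(tasks, servers):.2f}")  # 3 of 4 tasks accepted -> 0.75
```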
Citations: 8
ERA: Meeting the Fairness between Sender-driven and Receiver-driven Transmission Protocols in Data Center Networks
Pub Date : 2022-07-01 DOI: 10.1109/ICDCS54860.2022.00036
Sen Liu, F. Liang, Wei Yan, Zehua Guo, Xiang Lin, Yang Xu
Modern data centers require high-throughput and low-latency transmission to meet the demands of distributed applications on communication delay. Compared with traditional sender-driven try-and-back-off protocols (e.g., TCP and its variants), receiver-driven protocols (RDPs) achieve ultra-low transmission latency by reacting to credits or tokens issued by receivers. However, RDPs face fairness challenges when coexisting with sender-driven protocols (SDPs) in multi-tenant data centers. Their flows barely survive during coexistence with SDP flows, since the delicate scheduling of their credits is disrupted and overwhelmed by SDP data packets. To tackle this issue, we propose the Equivalent Rate Adaptor (ERA), a scheme that converts the proactive try-and-back-off mode of SDPs to an RDP-like credit-based reactive mode. ERA leverages the advertised window field in ACK headers at the receiver side to precisely limit the number of in-flight packets or bytes in SDPs and thus reduce their impact on RDPs. Therefore, ERA not only ensures fairness between the two different types of protocols, but also maintains the low-latency feature of RDPs. Moreover, ERA is lightweight, flexible, and transparent to tenants, as it embeds into the prevalent Open vSwitch in the public cloud. The evaluation on both a test-bed and NS2 simulation shows that ERA enables SDP flows and RDP flows to maintain good throughput and share the bandwidth fairly, improving the bandwidth stolen by up to 94.29%.
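The core mechanism, rewriting the receive window advertised in ACKs so that a sender-driven flow keeps at most a target number of bytes in flight, can be sketched as a pure function over the header fields. The target value and window-scale handling below are illustrative assumptions rather than ERA's exact policy, and a real implementation would also have to recompute the TCP checksum after modifying the field.

```python
def clamp_advertised_window(inflight_bytes: int,
                            target_inflight: int,
                            original_window: int,
                            window_scale: int) -> int:
    """Return the (unscaled) 16-bit window value to write into an ACK so that the
    sender may keep at most `target_inflight` bytes outstanding.
    `original_window` is the unscaled value the receiver intended to advertise."""
    allowed = max(target_inflight - inflight_bytes, 0)
    # The advertised window is carried as a 16-bit field, shifted left by the
    # negotiated window-scale option on the wire.
    clamped_scaled = min(original_window << window_scale, allowed)
    return min(clamped_scaled >> window_scale, 0xFFFF)

# Example: 600 KB already in flight, 1 MB target, receiver wanted to advertise ~8 MB.
print(clamp_advertised_window(600_000, 1_000_000, 0xFFFF, 7))  # 3125, i.e. ~400 KB effective
```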
Citations: 1
AdaDrone: Quality of Navigation Based Neural Adaptive Scheduling for Edge-Assisted Drones
Pub Date : 2022-07-01 DOI: 10.1109/ICDCS54860.2022.00059
Haowei Chen, Liekang Zeng, Xiaoxi Zhang, Xu Chen
Accurate navigation is of paramount importance to ensure flight safety and efficiency for autonomous drones. Recent research has started to use Deep Neural Networks (DNNs) to enhance drone navigation, given their remarkable predictive capability for visual perception. However, existing solutions either run DNN inference tasks on drones in situ, impeded by the limited onboard resources, or offload the computation to external servers, which may incur large network latency. Few works consider jointly optimizing the offloading decisions along with image transmission configurations and adapting them on the fly. In this paper, we propose AdaDrone, an edge computing assisted drone navigation framework that can dynamically adjust task execution location, input resolution, and image compression ratio in order to achieve low inference latency, high prediction accuracy, and long flight distances. Specifically, we first augment state-of-the-art convolutional neural networks for drone navigation and define a novel metric called Quality of Navigation as our optimization objective, which can effectively capture the above goals. We then design a deep reinforcement learning (DRL) based neural scheduler, for which an information encoder is devised to reshape the state features and thus improve its learning ability. We finally implement a prototype of our framework wherein a drone board for navigation and scheduling control interacts with edge servers for task offloading and a simulator for performance evaluation. Extensive experimental results show that AdaDrone can reduce end-to-end latency by 28.06% and extend the flight distance by up to 27.28% compared with non-adaptive solutions.
Citations: 2
CODE: Compact IoT Data Collection with Precise Matrix Sampling and Efficient Inference
Pub Date : 2022-07-01 DOI: 10.1109/ICDCS54860.2022.00077
Huali Lu, Feng Lyu, Ju Ren, Jiadi Yu, Fan Wu, Yaoxue Zhang, X. Shen
It is impractical to conduct full-size data collection in ubiquitous IoT data systems due to the energy constraints of IoT sensors and large system scales. Although sparse sensing technologies have been proposed to infer missing data from partially sampled data, they usually focus on data inference while neglecting the sampling process, limiting the inference efficiency. In addition, their inference methods depend heavily on linear correlations in the data and become less effective when the data are not linearly correlated. In this paper, we propose Compact IoT Data CollEction, namely CODE, to conduct precise data matrix sampling and efficient inference. In particular, CODE integrates two major components, i.e., cluster-based matrix sampling and Generative Adversarial Network (GAN)-based matrix inference, to reduce the data collection cost and guarantee the data benefits, respectively. In the sampling component, a cluster-based sampling approach is devised, in which data clustering is first conducted and then a two-step sampling is performed in accordance with the number of clusters and the clustering errors. For the inference component, a GAN-based model is developed to estimate the full matrix; it consists of a generator network that learns to generate a fake matrix and a discriminator network that learns to discriminate the fake matrix from the real one. A reference implementation of CODE is conducted on three operational large-scale IoT systems, and extensive data-driven experiment results are provided to demonstrate its efficiency and robustness.
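The two-step sampling idea, clustering the historical readings first and then spreading the sampling budget across clusters, can be sketched with scikit-learn as below. Allocating the budget in proportion to each cluster's within-cluster error is an assumption made for illustration; the paper's exact allocation rule based on the number of clusters and clustering errors may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def two_step_sample(history: np.ndarray, n_clusters: int, budget: int, rng=None):
    """Pick roughly `budget` sensor indices to sample: cluster sensors on their
    historical readings, then split the budget across clusters in proportion to
    each cluster's within-cluster error (illustrative allocation rule)."""
    if rng is None:
        rng = np.random.default_rng(0)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(history)
    labels, centers = km.labels_, km.cluster_centers_
    # Per-cluster squared error of each sensor's history to its cluster center.
    errors = np.array([np.sum((history[labels == c] - centers[c]) ** 2)
                       for c in range(n_clusters)])
    shares = np.maximum((errors / errors.sum()) * budget, 1).astype(int)
    chosen = []
    for c in range(n_clusters):
        members = np.where(labels == c)[0]
        k = min(len(members), shares[c])
        chosen.extend(rng.choice(members, size=k, replace=False))
    return sorted(chosen)

# 40 sensors x 24 past readings, sample about 10 of them.
history = np.random.default_rng(1).normal(size=(40, 24))
print(two_step_sample(history, n_clusters=4, budget=10))
```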
Citations: 1
Towards Elasticity in Heterogeneous Edge-dense Environments
Pub Date : 2022-07-01 DOI: 10.1109/ICDCS54860.2022.00046
Lei Huang, Zhiying Liang, N. Sreekumar, S. Kaushik, A. Chandra, J. Weissman
Edge computing has enabled a large set of emerging edge applications by exploiting data proximity and offloading computation-intensive workloads to nearby edge servers. However, supporting edge application users at scale poses challenges due to limited point-of-presence edge sites and constrained elasticity. In this paper, we introduce a densely-distributed edge resource model that leverages capacity-constrained volunteer edge nodes to support elastic computation offloading. Our model also enables the use of geo-distributed edge nodes to further support elasticity. Collectively, these features raise the issue of edge selection. We present a distributed edge selection approach that relies on client-centric views of available edge nodes to optimize average end-to-end latency, with considerations of system heterogeneity, resource contention and node churn. Elasticity is achieved by fine-grained performance probing, dynamic load balancing, and proactive multi-edge node connections per client. Evaluations are conducted in both real-world volunteer environments and emulated platforms to show how a common edge application, namely AR-based cognitive assistance, can benefit from our approach and deliver low-latency responses to distributed users at scale.
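A client-centric selection step of the kind described, probing candidate edge nodes, estimating end-to-end latency as network RTT plus a load term, and picking the minimum, can be sketched as follows. The candidate host names, probe method, and load weighting are illustrative assumptions, not the paper's protocol.

```python
import socket
import time

# Hypothetical candidate volunteer edge nodes (host, port); placeholders only.
CANDIDATES = [("edge-a.example.org", 9000), ("edge-b.example.org", 9000)]

def probe_rtt(host: str, port: int, timeout: float = 1.0) -> float:
    """Measure TCP connect time as a cheap RTT estimate; return +inf on failure."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0  # milliseconds
    except OSError:
        return float("inf")

def select_edge(load_ms: dict, load_weight: float = 1.0):
    """Pick the candidate minimizing probed RTT plus a reported queueing estimate."""
    scored = []
    for host, port in CANDIDATES:
        rtt = probe_rtt(host, port)
        scored.append((rtt + load_weight * load_ms.get(host, 0.0), (host, port)))
    return min(scored)[1]

# `load_ms` would come from the nodes' own performance reports in a real system.
best = select_edge({"edge-a.example.org": 12.0, "edge-b.example.org": 40.0})
print("offloading to", best)
```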
Citations: 2
Supporting Real-time Networkwide T-Queries in High-speed Networks
Pub Date : 2022-07-01 DOI: 10.1109/ICDCS54860.2022.00010
Yuanda Wang, Haibo Wang, Chaoyi Ma, Shigang Chen
Traffic measurement is key to many important network functions. Supporting real-time queries at the individual-flow level over networkwide traffic represents a major challenge that has not yet been successfully addressed. This paper provides the first solutions for supporting real-time networkwide queries, allowing a local network function (for performance, security, or management purposes) to query, at any measurement point and at any time, any flow's networkwide statistics, even though the packets of the flow may traverse different paths in the network, some of which may not pass through the point where the query is made. Our trace-based experiments demonstrate that the proposed solutions significantly outperform baseline solutions derived from existing techniques.
Citations: 0