Pub Date: 2022-07-01 | DOI: 10.1109/ICDCS54860.2022.00124
Richard Patsch
The increasing demand for computational resources keeps outpacing available User Equipment (UE). To overcome the intrinsic hardware limitations of UEs, computational offloading has been proposed: combining UE with the seemingly endless computational capacity of the cloud aims to cope with those limitations. Numerous frameworks leverage Edge Computing (EC), but a significant drawback of EC is the required infrastructure. Some use cases, however, do not benefit from lower response times and can remain in the cloud, where more potent resources are at one's disposal. The main contributions are to determine computational demands, allocate serverless resources, partition code, and integrate computational offloading into a modern software deployment process. By focusing on non-time-critical use cases, the drawbacks of EC can be neglected to create a more developer-friendly approach. The originality lies in the allocation of serverless resources for such endeavours, the appropriate deployment of partitions, and the integration into CI/CD pipelines. The methodology used will be Design Science Research; thus, many iterations and proof-of-concept implementations will yield knowledge and artefacts.
Title: Computational Offloading for Non-Time-Critical Applications
Pub Date: 2022-07-01 | DOI: 10.1109/ICDCS54860.2022.00035
Zhimei Sui, Joseph K. Liu, Jiangshan Yu, Xianrui Qin
We propose MoNet, the first bi-directional payment channel network with unlimited lifetime for Monero. It is fully compatible with Monero, requiring no modification of the current Monero blockchain. MoNet preserves transaction fungibility, i.e., transactions over MoNet and Monero are indistinguishable, and guarantees the anonymity of Monero and MoNet users by avoiding any potential privacy leakage introduced by the new payment channel network. We also propose a new cryptographic primitive, named Verifiable Consecutive One-way Function (VCOF). It allows one to generate a sequence of statement-witness pairs in a consecutive and verifiable way, and these statement-witness pairs are one-way: it is easy to compute the next statement-witness pair from any of the pre-generated pairs, but hard in the opposite direction. Using a VCOF, a signer can produce a series of consecutive adaptor signatures (CAS). We further propose a generic construction of consecutive adaptor signatures as an important building block of MoNet. We develop a proof-of-concept implementation of MoNet, and our evaluation shows that MoNet can reach the same transaction throughput as Lightning Network, the payment channel network for Bitcoin. Moreover, we provide a security analysis of MoNet under the Universal Composability (UC) framework.
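The "consecutive one-way" property can be illustrated with a toy hash-chain sketch. This is an illustrative assumption, not the paper's VCOF construction (which must additionally support adaptor signatures): from any pair, one can derive all later pairs by forward hashing, while going backwards would require inverting the hash.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def next_pair(witness: bytes):
    """Derive the next (statement, witness) pair from the current witness.
    Going forward is one hash call; going backward would require inverting SHA-256."""
    next_witness = h(b"wit" + witness)
    statement = h(b"stmt" + next_witness)  # publicly checkable commitment to the witness
    return statement, next_witness

def verify(statement: bytes, witness: bytes) -> bool:
    """Check that a witness matches its statement."""
    return statement == h(b"stmt" + witness)

# Generate a consecutive sequence of pairs from an initial secret seed.
seed = b"initial secret"
pairs = []
w = seed
for _ in range(5):
    s, w = next_pair(w)
    pairs.append((s, w))
```

Anyone holding `pairs[i]` can recompute `pairs[i+1]`, `pairs[i+2]`, and so on, but earlier witnesses stay hidden; the real VCOF additionally makes this chain usable inside adaptor signatures.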
Title: MoNet: A Fast Payment Channel Network for Scriptless Cryptocurrency Monero
Pub Date: 2022-07-01 | DOI: 10.1109/ICDCS54860.2022.00153
Khalid Hourani, Gopal Pandurangan, Peter Robinson
We present a simple algorithmic framework for designing efficient distributed algorithms for the fundamental symmetry-breaking problem of Maximal Independent Set (MIS) in the sleeping model [Chatterjee et al., PODC 2020]. In the sleeping model, only the rounds in which a node is awake are counted towards the awake complexity, while sleeping rounds are ignored. This is motivated by the fact that a node spends resources only in its awake rounds, and hence the goal is to minimize the awake complexity. Our framework allows us to design distributed MIS algorithms that have $\mathcal{O}(\text{polyloglog } n)$ (worst-case) awake complexity in certain important graph classes which satisfy the so-called adjacency property. Informally, the adjacency property guarantees that the graph can be partitioned into an appropriate number of classes so that each node has at least one neighbor belonging to every class. Graphs that satisfy the adjacency property include random graphs with large clustering coefficient, such as random geometric graphs, as well as line graphs of regular (or near-regular) graphs. We first apply our framework to design two randomized distributed MIS algorithms for random geometric graphs of arbitrary dimension d (even non-constant). The first algorithm has $\mathcal{O}(\text{polyloglog } n)$ (worst-case) awake complexity with high probability, where n is the number of nodes in the graph. This means that any node in the network spends only $\mathcal{O}(\text{polyloglog } n)$ awake rounds; this is almost exponentially better than the (traditional) time complexity of $\mathcal{O}(\log n)$ rounds (where there is no distinction between awake and sleeping rounds) known for distributed MIS algorithms on general graphs, or even the faster $\mathcal{O}\left(\sqrt{\frac{\log n}{\log\log n}}\right)$ rounds known for Erdős–Rényi random graphs. However, the (traditional) time complexity of our first algorithm is quite large, essentially proportional to the degree of the graph. Our second algorithm has a slightly worse awake complexity of $\mathcal{O}(d \cdot \text{polyloglog } n)$, but achieves a significantly better time complexity of $\mathcal{O}(d \log n \cdot \text{polyloglog } n)$ rounds with high probability. We also show that our framework can be used to design $\mathcal{O}(\text{polyloglog } n)$ awake complexity MIS algorithms for other types of random graphs, namely an augmented Erdős–Rényi random graph that has a large clustering coefficient.
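The gap between awake and traditional round complexity can be seen in a minimal accounting sketch (an assumption for exposition, not the paper's MIS algorithm): a node assigned to one of k classes sleeps in every round except those matching its class index, so it pays for only a fraction of the execution's total rounds.

```python
# Toy illustration of the sleeping model's accounting: a node assigned to class c
# is awake only in rounds r with r % k == c, so over T total rounds it is awake
# roughly T / k times, regardless of how long the whole execution runs.

def awake_rounds(node_class: int, k: int, total_rounds: int) -> int:
    """Count the rounds in which a class-`node_class` node is awake."""
    return sum(1 for r in range(total_rounds) if r % k == node_class)

k = 8    # number of classes in the partition
T = 64   # traditional round complexity: every round counts
awake = awake_rounds(3, k, T)   # awake complexity: only awake rounds count
```

Here the node is awake in only 8 of 64 rounds; the framework's contribution is choosing the partition (via the adjacency property) so that the MIS computation still makes progress despite most nodes sleeping in any given round.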
Title: Awake-Efficient Distributed Algorithms for Maximal Independent Set
The proliferation of edge computing brings new challenges due to the complexity of decentralized edge networks. Software-defined networking (SDN) offers programmability and flexibility for handling complicated networks. However, designing an SDN control plane that is both trusted and scalable, the core component of the SDN architecture for edge computing, remains an open problem. In this paper, we propose Curb, a novel group-based SDN control plane that seamlessly integrates blockchain and BFT consensus to ensure Byzantine fault tolerance, verifiability, traceability, and scalability within one framework. Curb supports trusted flow rule updates and adaptive controller reassignment. Importantly, we leverage a group-based control plane to realize a scalable network where the message complexity of each round is upper-bounded by O(N), where N is the number of controllers, to reduce the overheads caused by blockchain consensus. Finally, we conduct extensive simulations on the classical Internet2 network to validate our design.
Title: Curb: Trusted and Scalable Software-Defined Network Control Plane for Edge Computing
Authors: Minghui Xu, Chenxu Wang, Yifei Zou, Dongxiao Yu, Xiuzhen Cheng, Weifeng Lyu
Pub Date: 2022-07-01 | DOI: 10.1109/ICDCS54860.2022.00054
Pub Date: 2022-07-01 | DOI: 10.1109/ICDCS54860.2022.00098
Kun Yang, Peng Sun, Jieyu Lin, A. Boukerche, Liang Song
In recent years, data-driven intelligent transportation systems (ITS) have developed rapidly and brought various AI-assisted applications that improve traffic efficiency. However, these applications are constrained by their inherently high computing demand and the limits of vehicular computing power. Vehicular edge computing (VEC) has shown great potential to support these applications by providing computing and storage capacity in close proximity. Given the heterogeneous nature of in-vehicle applications and the highly dynamic network topology in the Internet-of-Vehicles (IoV) environment, achieving efficient scheduling of computational tasks is a critical problem. Accordingly, we design a two-layer distributed online task scheduling framework to maximize the task acceptance ratio (TAR) under various QoS requirements in the face of unbalanced task distribution. Briefly, we implement computation offloading and transmission scheduling policies on the vehicles to optimize onboard computational task scheduling. Meanwhile, in the edge computing layer, a new distributed task dispatching policy is developed to maximize the utilization of system computing power and minimize the data transmission delay caused by vehicle motion. Through single-vehicle and multi-vehicle simulations, we evaluate the performance of our framework, and the experimental results show that our method outperforms state-of-the-art algorithms. Moreover, we conduct ablation experiments to validate the effectiveness of our core algorithms.
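As a minimal sketch of the stated objective (the simple deadline model here is an assumption; the paper's QoS requirements may be richer), the task acceptance ratio counts the fraction of tasks whose completion time meets their deadline:

```python
# Hypothetical illustration of the task acceptance ratio (TAR) objective:
# a task is accepted only if its scheduled completion time meets its QoS deadline.

def task_acceptance_ratio(tasks):
    """tasks: list of (completion_time, deadline) pairs, in the same time unit."""
    if not tasks:
        return 0.0
    accepted = sum(1 for finish, deadline in tasks if finish <= deadline)
    return accepted / len(tasks)

# Four offloaded tasks; the second one misses its deadline.
tasks = [(12.0, 15.0), (30.0, 25.0), (8.0, 10.0), (40.0, 40.0)]
tar = task_acceptance_ratio(tasks)
```

A scheduler maximizing TAR would prefer placements and transmission orders that push completion times under their deadlines, which is what the two-layer framework optimizes across vehicles and edge servers.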
Title: A Novel Distributed Task Scheduling Framework for Supporting Vehicular Edge Intelligence
Pub Date: 2022-07-01 | DOI: 10.1109/ICDCS54860.2022.00036
Sen Liu, F. Liang, Wei Yan, Zehua Guo, Xiang Lin, Yang Xu
Modern data centers require high-throughput, low-latency transmission to meet the communication-delay demands of distributed applications. Compared with traditional sender-driven try-and-back-off protocols (e.g., TCP and its variants), receiver-driven protocols (RDPs) achieve ultra-low transmission latency by reacting to credits or tokens issued by receivers. However, RDPs face fairness challenges when coexisting with sender-driven protocols (SDPs) in multi-tenant data centers: their flows barely survive coexistence with SDP flows, since the delicate scheduling of their credits is disrupted and overwhelmed by SDP data packets. To tackle this issue, we propose the Equivalent Rate Adaptor (ERA), a scheme that converts the proactive try-and-back-off mode of SDPs into an RDP-like credit-based reactive mode. ERA leverages the advertised window field in ACK headers at the receiver side to precisely limit the number of in-flight packets or bytes in SDPs and thus reduce their impact on RDPs. Therefore, ERA not only ensures fairness between the two types of protocols, but also maintains the low-latency property of RDPs. Moreover, ERA is lightweight, flexible, and transparent to tenants, being embedded into the prevalent Open vSwitch in the public cloud. Evaluations on both a test-bed and NS2 simulations show that ERA enables SDP flows and RDP flows to maintain good throughput and share bandwidth fairly, recovering up to 94.29% of the stolen bandwidth.
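The receiver-side mechanism can be sketched as a simple clamp on the advertised window written into outgoing ACKs. This is a hedged sketch of the idea, not ERA's actual implementation; the function name and byte bookkeeping are illustrative assumptions.

```python
# Sketch: rewrite the advertised window in ACKs so a sender-driven flow can keep
# at most `credit_bytes` in flight, mimicking a receiver-driven credit grant.

def clamp_advertised_window(orig_rwnd: int, credit_bytes: int, in_flight_bytes: int) -> int:
    """Return the advertised window to write into the outgoing ACK header."""
    remaining_credit = max(credit_bytes - in_flight_bytes, 0)
    return min(orig_rwnd, remaining_credit)

# A flow granted 64 KB of credit with 48 KB already in flight is advertised only
# 16 KB, even though the receiver's real buffer (256 KB) could accept far more.
adv = clamp_advertised_window(orig_rwnd=256 * 1024,
                              credit_bytes=64 * 1024,
                              in_flight_bytes=48 * 1024)
```

Because TCP senders never exceed the advertised window, this turns the sender's proactive probing into a reactive, credit-paced flow without modifying the sender's stack, which is why the scheme can sit transparently in a virtual switch.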
Title: ERA: Meeting the Fairness between Sender-driven and Receiver-driven Transmission Protocols in Data Center Networks
Pub Date: 2022-07-01 | DOI: 10.1109/ICDCS54860.2022.00059
Haowei Chen, Liekang Zeng, Xiaoxi Zhang, Xu Chen
Accurate navigation is of paramount importance to ensure flight safety and efficiency for autonomous drones. Recent research has started to use Deep Neural Networks (DNNs) to enhance drone navigation, given their remarkable predictive capability for visual perception. However, existing solutions either run DNN inference tasks on drones in situ, impeded by the limited onboard resources, or offload the computation to external servers, which may incur large network latency. Few works consider jointly optimizing the offloading decisions along with image transmission configurations and adapting them on the fly. In this paper, we propose AdaDrone, an edge-computing-assisted drone navigation framework that can dynamically adjust the task execution location, input resolution, and image compression ratio to achieve low inference latency, high prediction accuracy, and long flight distances. Specifically, we first augment state-of-the-art convolutional neural networks for drone navigation and define a novel metric called Quality of Navigation (QoN) as our optimization objective, which effectively captures the above goals. We then design a deep reinforcement learning (DRL) based neural scheduler, for which an information encoder is devised to reshape the state features and thus improve its learning ability. We finally implement a prototype of our framework wherein a drone board for navigation and scheduling control interacts with edge servers for task offloading and with a simulator for performance evaluation. Extensive experimental results show that AdaDrone reduces end-to-end latency by 28.06% and extends flight distance by up to 27.28% compared with non-adaptive solutions.
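The paper defines Quality of Navigation as its optimization objective but its exact form is not reproduced here, so the following is a hypothetical stand-in that merely captures the stated trade-off: reward prediction accuracy while penalizing inference latency beyond a budget. The function name, weights, and budget are all illustrative assumptions.

```python
# Hypothetical QoN stand-in: accuracy discounted by normalized excess latency,
# clipped to [0, 1]. Not the paper's actual metric.

def qon(accuracy: float, latency_ms: float, latency_budget_ms: float = 50.0,
        latency_weight: float = 0.5) -> float:
    """Toy Quality-of-Navigation score in [0, 1]."""
    excess = max(latency_ms - latency_budget_ms, 0.0) / latency_budget_ms
    return max(accuracy - latency_weight * excess, 0.0)

# Offloading at a higher resolution may raise accuracy but also latency; a
# scheduler would pick the (location, resolution, compression) choice with the
# highest score.
local = qon(accuracy=0.80, latency_ms=40.0)   # within budget: score = accuracy
edge = qon(accuracy=0.92, latency_ms=70.0)    # over budget: accuracy is discounted
```

Under some such metric, the DRL scheduler's job is exactly this comparison, made continuously as channel conditions and scene difficulty change in flight.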
Title: AdaDrone: Quality of Navigation Based Neural Adaptive Scheduling for Edge-Assisted Drones
Pub Date: 2022-07-01 | DOI: 10.1109/ICDCS54860.2022.00077
Huali Lu, Feng Lyu, Ju Ren, Jiadi Yu, Fan Wu, Yaoxue Zhang, X. Shen
It is impractical to conduct full-size data collection in ubiquitous IoT data systems due to the energy constraints of IoT sensors and the large scale of such systems. Although sparse sensing technologies have been proposed to infer missing data from partially sampled data, they usually focus on data inference while neglecting the sampling process, limiting inference efficiency. In addition, their inference methods depend heavily on linear correlations in the data, and become less effective when the data are not linearly correlated. In this paper, we propose Compact IoT Data CollEction (CODE) to conduct precise data matrix sampling and efficient inference. In particular, CODE integrates two major components, cluster-based matrix sampling and Generative Adversarial Network (GAN)-based matrix inference, to reduce the data collection cost and guarantee the data benefits, respectively. In the sampling component, a cluster-based sampling approach is devised: data clustering is first conducted, and then a two-step sampling is performed according to the number of clusters and the clustering errors. For the inference component, a GAN-based model is developed to estimate the full matrix; it consists of a generator network that learns to generate a fake matrix and a discriminator network that learns to discriminate the fake matrix from the real one. A reference implementation of CODE is deployed over three operational large-scale IoT systems, and extensive data-driven experimental results demonstrate its efficiency and robustness.
Title: CODE: Compact IoT Data Collection with Precise Matrix Sampling and Efficient Inference
Pub Date : 2022-07-01DOI: 10.1109/ICDCS54860.2022.00046
Lei Huang, Zhiying Liang, N. Sreekumar, S. Kaushik, A. Chandra, J. Weissman
Edge computing has enabled a large set of emerging edge applications by exploiting data proximity and offloading computation-intensive workloads to nearby edge servers. However, supporting edge application users at scale poses challenges due to limited point-of-presence edge sites and constrained elasticity. In this paper, we introduce a densely-distributed edge resource model that leverages capacity-constrained volunteer edge nodes to support elastic computation offloading. Our model also enables the use of geo-distributed edge nodes to further support elasticity. Collectively, these features raise the issue of edge selection. We present a distributed edge selection approach that relies on client-centric views of available edge nodes to optimize average end-to-end latency, with considerations of system heterogeneity, resource contention and node churn. Elasticity is achieved by fine-grained performance probing, dynamic load balancing, and proactive multi-edge node connections per client. Evaluations are conducted in both real-world volunteer environments and emulated platforms to show how a common edge application, namely AR-based cognitive assistance, can benefit from our approach and deliver low-latency responses to distributed users at scale.
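A client-centric selector of the kind described above can be sketched in a few lines. This is a hedged illustration under assumed names and parameters (`EdgeSelector`, the EWMA smoothing factor `alpha`, the probing rate `probe_rate`); the paper's actual mechanism additionally involves dynamic load balancing and proactive multi-edge connections, which are omitted here.

```python
import random

class EdgeSelector:
    """Hypothetical client-side selector: keeps an EWMA latency estimate
    per volunteer edge node, routes each request to the current best node,
    and occasionally probes others to track churn and resource contention."""

    def __init__(self, nodes, alpha=0.3, probe_rate=0.1):
        self.ewma = {n: None for n in nodes}  # latest latency estimate per node
        self.alpha = alpha
        self.probe_rate = probe_rate

    def pick(self):
        # Probe nodes with no estimate yet, or occasionally re-probe at random.
        unknown = [n for n, v in self.ewma.items() if v is None]
        if unknown or random.random() < self.probe_rate:
            return random.choice(unknown or list(self.ewma))
        # Otherwise exploit the node with the lowest estimated latency.
        return min(self.ewma, key=self.ewma.get)

    def report(self, node, latency_ms):
        # Fold the observed end-to-end latency into the EWMA estimate.
        prev = self.ewma[node]
        self.ewma[node] = latency_ms if prev is None else (
            self.alpha * latency_ms + (1 - self.alpha) * prev)
```

A client would call `pick()` before each offloading request and `report()` after each response, so a node that degrades under contention or churns away is quickly demoted.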
{"title":"Towards Elasticity in Heterogeneous Edge-dense Environments","authors":"Lei Huang, Zhiying Liang, N. Sreekumar, S. Kaushik, A. Chandra, J. Weissman","doi":"10.1109/ICDCS54860.2022.00046","DOIUrl":"https://doi.org/10.1109/ICDCS54860.2022.00046","url":null,"abstract":"Edge computing has enabled a large set of emerging edge applications by exploiting data proximity and offloading computation-intensive workloads to nearby edge servers. However, supporting edge application users at scale poses challenges due to limited point-of-presence edge sites and constrained elasticity. In this paper, we introduce a densely-distributed edge resource model that leverages capacity-constrained volunteer edge nodes to support elastic computation offloading. Our model also enables the use of geo-distributed edge nodes to further support elasticity. Collectively, these features raise the issue of edge selection. We present a distributed edge selection approach that relies on client-centric views of available edge nodes to optimize average end-to-end latency, with considerations of system heterogeneity, resource contention and node churn. Elasticity is achieved by fine-grained performance probing, dynamic load balancing, and proactive multi-edge node connections per client. 
Evaluations are conducted in both real-world volunteer environments and emulated platforms to show how a common edge application, namely AR-based cognitive assistance, can benefit from our approach and deliver low-latency responses to distributed users at scale.","PeriodicalId":225883,"journal":{"name":"2022 IEEE 42nd International Conference on Distributed Computing Systems (ICDCS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128196202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-07-01DOI: 10.1109/ICDCS54860.2022.00010
Yuanda Wang, Haibo Wang, Chaoyi Ma, Shigang Chen
Traffic measurement is key to many important network functions. Supporting real-time queries at the individual flow level over networkwide traffic represents a major challenge that has not been successfully addressed yet. This paper provides the first solutions that support real-time networkwide queries, allowing a local network function (for performance, security, or management purposes) to query, at any measurement point and at any time, any flow’s networkwide statistics, even when the flow’s packets traverse paths that never pass through the point where the query is made. Our trace-based experiments demonstrate that the proposed solutions significantly outperform baseline solutions derived from existing techniques.
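To make the problem setting concrete, the naive baseline can be sketched as below: each measurement point counts packets per flow locally, and a networkwide answer requires aggregating over all points at query time. This sketch is only the baseline the abstract contrasts against, not the paper's solution; the class and function names are hypothetical.

```python
from collections import Counter

class MeasurementPoint:
    """Hypothetical measurement point: counts observed packets per flow ID."""

    def __init__(self):
        self.counts = Counter()

    def observe(self, flow_id):
        self.counts[flow_id] += 1

def networkwide_query(points, flow_id):
    """Naive baseline: answer a per-flow query by summing over every
    measurement point, covering paths that never cross the querying point.
    The aggregation cost at query time is what real-time solutions avoid."""
    return sum(p.counts[flow_id] for p in points)
```

The challenge addressed by the paper is answering such queries in real time at any single point, without this per-query networkwide aggregation.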
{"title":"Supporting Real-time Networkwide T-Queries in High-speed Networks","authors":"Yuanda Wang, Haibo Wang, Chaoyi Ma, Shigang Chen","doi":"10.1109/ICDCS54860.2022.00010","DOIUrl":"https://doi.org/10.1109/ICDCS54860.2022.00010","url":null,"abstract":"Traffic measurement is key to many important network functions. Supporting real-time queries at the individual flow level over networkwide traffic represents a major challenge that has not been successfully addressed yet. This paper provides the first solutions in supporting real-time networkwide queries and allowing a local network function (for performance, security or management purpose) to make queries at any measurement point at any time on any flow’s networkwide statistics, while the packets of the flow may traverse different paths in the network, some of which may not come across the point where the query is made. Our trace-based experiments demonstrate that the proposed solutions significantly outperform the baseline solutions derived from the existing techniques.","PeriodicalId":225883,"journal":{"name":"2022 IEEE 42nd International Conference on Distributed Computing Systems (ICDCS)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127440524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}