Defuse: A Dependency-Guided Function Scheduler to Mitigate Cold Starts on FaaS Platforms
Pub Date: 2021-07-01 | DOI: 10.1109/ICDCS51616.2021.00027
Jiacheng Shen, Tianyi Yang, Yuxin Su, Yangfan Zhou, Michael R. Lyu
Function-as-a-Service (FaaS) is becoming a prevalent paradigm for developing cloud applications. With FaaS, clients can develop applications as serverless functions, leaving the burden of resource management to cloud providers. However, FaaS platforms suffer from performance degradation caused by the cold starts of serverless functions. Cold starts happen when serverless functions are invoked before they have been loaded into memory. The problem is unavoidable because the memory in datacenters is typically too limited to hold all serverless functions simultaneously. The latency of cold function invocations greatly degrades the performance of FaaS platforms. Currently, FaaS platforms employ various scheduling methods to reduce the occurrence of cold starts, but they do not consider the ubiquitous dependencies between serverless functions. Observing the potential of using dependencies to mitigate cold starts, we propose Defuse, a Dependency-guided Function Scheduler on FaaS platforms. Specifically, Defuse identifies two types of dependencies between serverless functions: strong dependencies and weak ones. It uses frequent pattern mining and positive point-wise mutual information, respectively, to mine these dependencies from function invocation histories. In this way, Defuse constructs a function dependency graph. The connected components (i.e., dependent functions) of the graph can be scheduled together to diminish the occurrence of cold starts. We evaluate the effectiveness of Defuse by applying it to an industrial serverless dataset. The experimental results show that Defuse reduces memory usage by 22% and the function cold-start rate by 35% compared with the state-of-the-art method.
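To make the weak-dependency mining concrete, the sketch below is a hypothetical illustration (not the authors' code): it computes positive point-wise mutual information over co-invocation windows from the invocation history and groups dependent functions into connected components, the granularity at which Defuse schedules. The window representation, threshold, and helper names are assumptions.

```python
from collections import Counter
from itertools import combinations
from math import log2

def mine_weak_dependencies(invocation_windows, ppmi_threshold=1.0):
    """Sketch: weak dependencies via positive PMI over co-invocation windows.

    invocation_windows: list of sets of function ids observed in the same time window
    (an assumed input format; the paper mines real invocation histories).
    """
    n = len(invocation_windows)
    single = Counter()
    pair = Counter()
    for window in invocation_windows:
        for f in window:
            single[f] += 1
        for a, b in combinations(sorted(window), 2):
            pair[(a, b)] += 1

    edges = set()
    for (a, b), c in pair.items():
        pmi = log2((c / n) / ((single[a] / n) * (single[b] / n)))
        if pmi > 0 and pmi >= ppmi_threshold:  # keep only positive, strong-enough associations
            edges.add((a, b))
    return edges

def connected_components(functions, edges):
    """Group dependent functions so each component can be scheduled together."""
    parent = {f: f for f in functions}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    groups = {}
    for f in functions:
        groups.setdefault(find(f), set()).add(f)
    return list(groups.values())
```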
{"title":"Defuse: A Dependency-Guided Function Scheduler to Mitigate Cold Starts on FaaS Platforms","authors":"Jiacheng Shen, Tianyi Yang, Yuxin Su, Yangfan Zhou, Michael R. Lyu","doi":"10.1109/ICDCS51616.2021.00027","DOIUrl":"https://doi.org/10.1109/ICDCS51616.2021.00027","url":null,"abstract":"Function-as-a-Service (FaaS) is becoming a prevalent paradigm in developing cloud applications. With FaaS, clients can develop applications as serverless functions, leaving the burden of resource management to cloud providers. However, FaaS platforms suffer from the performance degradation caused by the cold starts of serverless functions. Cold starts happen when serverless functions are invoked before they have been loaded into the memory. The problem is unavoidable because the memory in datacenters is typically too limited to hold all serverless functions simultaneously. The latency of cold function invocations will greatly degenerate the performance of FaaS platforms. Currently, FaaS platforms employ various scheduling methods to reduce the occurrences of cold starts. However, they do not consider the ubiquitous dependencies between serverless functions. Observing the potential of using dependencies to mitigate cold starts, we propose Defuse, a Dependency-guided Function Scheduler on FaaS platforms. Specifically, Defuse identifies two types of dependencies between serverless functions, i.e., strong dependencies and weak ones. It uses frequent pattern mining and positive point-wise mutual information to mine such dependencies respectively from function invocation histories. In this way, Defuse constructs a function dependency graph. The connected components (i.e., dependent functions) on the graph can be scheduled to diminish the occurrences of cold starts. We evaluate the effectiveness of Defuse by applying it to an industrial serverless dataset. The experimental results show that Defuse can reduce 22% of memory usage while having a 35% decrease in function cold-start rates compared with the state-of-the-art method.","PeriodicalId":222376,"journal":{"name":"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125936788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed Online Service Coordination Using Deep Reinforcement Learning
Pub Date: 2021-07-01 | DOI: 10.1109/ICDCS51616.2021.00058
Stefan Schneider, Haydar Qarawlus, Holger Karl
Services often consist of multiple chained components, such as microservices in a service mesh or machine learning functions in a pipeline. Providing these services requires online coordination, including scaling the service, placing instances of all components in the network, scheduling traffic to these instances, and routing traffic through the network. Optimized service coordination is still a hard problem due to many influencing factors such as rapidly arriving user demands and limited node and link capacity. Existing approaches to solve the problem are often built on rigid models and assumptions tailored to specific scenarios. If the scenario changes and the assumptions no longer hold, they easily break and require manual adjustments by experts. Novel self-learning approaches using deep reinforcement learning (DRL) are promising but still have limitations, as they only address simplified versions of the problem and are typically centralized, and thus do not scale to practical large-scale networks. To address these issues, we propose a distributed self-learning service coordination approach using DRL. After centralized training, we deploy a distributed DRL agent at each node in the network, making fast coordination decisions locally in parallel with the other nodes. Each agent only observes its direct neighbors and does not need global knowledge. Hence, our approach scales independently of the size of the network. In our extensive evaluation using real-world network topologies and traffic traces, we show that our proposed approach outperforms a state-of-the-art conventional heuristic as well as a centralized DRL approach (60% higher throughput on average) while requiring less time per online decision (1 ms).
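As a rough illustration of the "local observation only" idea, the sketch below shows a per-node agent that maps features of a node and its direct neighbors to a traffic-split decision. The linear policy, observation format, and action space are assumptions for illustration; the paper's agents are trained DRL policies.

```python
import numpy as np

class LocalCoordinationAgent:
    """Sketch of a per-node agent acting only on local observations (a simplified
    stand-in for the paper's DRL agents; observation/action spaces are assumed)."""

    def __init__(self, num_neighbors, obs_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Linear policy: local observation -> score per option
        # (process locally or forward to one of the direct neighbors).
        self.weights = rng.normal(0.0, 0.1, size=(obs_dim, num_neighbors + 1))

    def act(self, local_obs):
        """local_obs: feature vector built from this node and its direct neighbors only."""
        scores = local_obs @ self.weights
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        # Split incoming traffic across options proportionally to the policy output.
        return probs
```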
{"title":"Distributed Online Service Coordination Using Deep Reinforcement Learning","authors":"Stefan Schneider, Haydar Qarawlus, Holger Karl","doi":"10.1109/ICDCS51616.2021.00058","DOIUrl":"https://doi.org/10.1109/ICDCS51616.2021.00058","url":null,"abstract":"Services often consist of multiple chained components such as microservices in a service mesh, or machine learning functions in a pipeline. Providing these services requires online coordination including scaling the service, placing instance of all components in the network, scheduling traffic to these instances, and routing traffic through the network. Optimized service coordination is still a hard problem due to many influencing factors such as rapidly arriving user demands and limited node and link capacity. Existing approaches to solve the problem are often built on rigid models and assumptions, tailored to specific scenarios. If the scenario changes and the assumptions no longer hold, they easily break and require manual adjustments by experts. Novel self-learning approaches using deep reinforcement learning (DRL) are promising but still have limitations as they only address simplified versions of the problem and are typically centralized and thus do not scale to practical large-scale networks. To address these issues, we propose a distributed self-learning service coordination approach using DRL. After centralized training, we deploy a distributed DRL agent at each node in the network, making fast coordination decisions locally in parallel with the other nodes. Each agent only observes its direct neighbors and does not need global knowledge. Hence, our approach scales independently from the size of the network. In our extensive evaluation using real-world network topologies and traffic traces, we show that our proposed approach outperforms a state-of-the-art conventional heuristic as well as a centralized DRL approach (60 % higher throughput on average) while requiring less time per online decision (1 ms).","PeriodicalId":222376,"journal":{"name":"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)","volume":"137 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124316501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gradient-Leakage Resilient Federated Learning
Pub Date: 2021-07-01 | DOI: 10.1109/ICDCS51616.2021.00081
Wenqi Wei, Ling Liu, Yanzhao Wu, Gong Su, A. Iyengar
Federated learning (FL) is an emerging distributed learning paradigm with default client privacy, because clients can keep sensitive data on their devices and only share local training parameter updates with the federated server. However, recent studies reveal that gradient leakages in FL may compromise the privacy of client training data. This paper presents a gradient-leakage resilient approach to privacy-preserving federated learning with per-training-example client differential privacy, coined Fed-CDP. It makes three original contributions. First, we identify three types of client gradient leakage threats in federated learning, even with encrypted client-server communications. We articulate when and why the conventional server-coordinated differential privacy approach, coined Fed-SDP, is insufficient to protect the privacy of the training data. Second, we introduce Fed-CDP, the per-example client differential privacy algorithm, and provide a formal analysis of Fed-CDP with the (ε, δ) differential privacy guarantee, and a formal comparison between Fed-CDP and Fed-SDP in terms of privacy accounting. Third, we formally analyze the privacy-utility tradeoff of providing the differential privacy guarantee in Fed-CDP and present a dynamic decay noise-injection policy to further improve the accuracy and resiliency of Fed-CDP. We evaluate and compare Fed-CDP and Fed-CDP(decay) with Fed-SDP in terms of differential privacy guarantee and gradient leakage resilience over five benchmark datasets. The results show that the Fed-CDP approach outperforms conventional Fed-SDP in terms of resilience to client gradient leakages while offering competitive accuracy in federated learning.
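The following sketch illustrates what per-example client-side differential privacy with a decaying noise schedule can look like in general: per-example clipping plus Gaussian noise. It is an assumption-laden illustration, not Fed-CDP's exact procedure or privacy accounting; the parameter names and decay form are invented for the example.

```python
import numpy as np

def per_example_dp_update(per_example_grads, clip_norm, sigma0, decay, round_idx):
    """Sketch of a per-example DP client update (illustrative only).

    per_example_grads: array of shape (batch, dim), one gradient per training example.
    sigma0 * decay**round_idx models a dynamic decay noise-injection policy (assumed form).
    """
    # Clip each example's gradient to bound its sensitivity.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / (norms + 1e-12))
    clipped = per_example_grads * scale

    # Add Gaussian noise whose scale decays over training rounds.
    sigma = sigma0 * (decay ** round_idx)
    noise = np.random.normal(0.0, sigma * clip_norm, size=clipped.shape[1])

    # Noisy average update shared with the federated server.
    return (clipped.sum(axis=0) + noise) / clipped.shape[0]
```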
{"title":"Gradient-Leakage Resilient Federated Learning","authors":"Wenqi Wei, Ling Liu, Yanzhao Wu, Gong Su, A. Iyengar","doi":"10.1109/ICDCS51616.2021.00081","DOIUrl":"https://doi.org/10.1109/ICDCS51616.2021.00081","url":null,"abstract":"Federated learning(FL) is an emerging distributed learning paradigm with default client privacy because clients can keep sensitive data on their devices and only share local training parameter updates with the federated server. However, recent studies reveal that gradient leakages in FL may compromise the privacy of client training data. This paper presents a gradient leakage resilient approach to privacy-preserving federated learning with per training example-based client differential privacy, coined as Fed-CDP. It makes three original contributions. First, we identify three types of client gradient leakage threats in federated learning even with encrypted client-server communications. We articulate when and why the conventional server coordinated differential privacy approach, coined as Fed-SDP, is insufficient to protect the privacy of the training data. Second, we introduce Fed-CDP, the per example-based client differential privacy algorithm, and provide a formal analysis of Fed-CDP with the (∊,δ) differential privacy guarantee, and a formal comparison between Fed-CDP and Fed-SDP in terms of privacy accounting. Third, we formally analyze the privacy-utility tradeoff for providing differential privacy guarantee by Fed-CDP and present a dynamic decay noise-injection policy to further improve the accuracy and resiliency of Fed-CDP. We evaluate and compare Fed-CDP and Fed-CDP(decay) with Fed-SDP in terms of differential privacy guarantee and gradient leakage resilience over five benchmark datasets. The results show that the Fed-CDP approach outperforms conventional Fed-SDP in terms of resilience to client gradient leakages while offering competitive accuracy performance in federated learning.","PeriodicalId":222376,"journal":{"name":"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124369431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Root Cause Analyses for the Deteriorating Bitcoin Network Synchronization
Pub Date: 2021-07-01 | DOI: 10.1109/ICDCS51616.2021.00031
Muhammad Saad, Songqing Chen, David A. Mohaisen
Bitcoin network synchronization is crucial for its security against partitioning attacks. From 2014 to 2018, the Bitcoin network size increased while the percentage of synchronized nodes decreased due to block propagation delay, which grows with the network size. However, in the last few months, network synchronization has deteriorated despite a constant network size. The change in the synchronization pattern suggests that the network size is not the only factor in play, necessitating a root cause analysis of network synchronization. In this paper, we perform a root cause analysis to study four factors that affect network synchronization: unreachable nodes, the addressing protocol, the information relaying protocol, and network churn. Our study reveals that the unreachable network is 24x the size of the reachable network. We also found that the network addressing protocol does not distinguish between reachable and unreachable nodes, leading to inefficiencies due to attempts to connect with unreachable nodes/addresses. We note that the outcome of this behavior is a low success rate of outgoing connections, which reduces the average outdegree. Through measurements, we found malicious nodes that exploit this opportunity to flood the network with unreachable addresses. We also discovered that Bitcoin follows a round-robin relaying mechanism that adds a small delay to block propagation. Finally, we observe high churn in the Bitcoin network, where ≈8% of nodes leave the network every day. In the last few months, the churn among synchronized nodes has doubled, which is likely the most dominant factor in the decreasing network synchronization. Consolidating our insights, we propose improvements to Bitcoin Core to increase network synchronization.
{"title":"Root Cause Analyses for the Deteriorating Bitcoin Network Synchronization","authors":"Muhammad Saad, Songqing Chen, David A. Mohaisen","doi":"10.1109/ICDCS51616.2021.00031","DOIUrl":"https://doi.org/10.1109/ICDCS51616.2021.00031","url":null,"abstract":"The Bitcoin network synchronization is crucial for its security against partitioning attacks. From 2014 to 2018, the Bitcoin network size has increased, while the percentage of synchronized nodes has decreased due to block propagation delay, which increases with the network size. However, in the last few months, the network synchronization has deteriorated despite a constant network size. The change in the synchronization pattern suggests that the network size is not the only factor in place, necessitating a root cause analysis of network synchronization. In this paper, we perform a root cause analysis to study four factors that affect network synchronization: the unreachable nodes, the addressing protocol, the information relaying protocol, and the network churn. Our study reveals that the unreachable nodes size is 24x the reachable network size. We also found that the network addressing protocol does not distinguish between reachable and unreachable nodes, leading to inefficiencies due to attempts to connect with unreachable nodes/addresses. We note that the outcome of this behavior is a low success rate of the outgoing connections, which reduces the average outdegree. Through measurements, we found malicious nodes that exploit this opportunity to flood the network with unreachable addresses. We also discovered that Bitcoin follows a round-robin relaying mechanism that adds a small delay in block propagation. Finally, we observe a high churn in the Bitcoin network where ≈8 % nodes leave the network every day. In the last few months the churn among synchronized nodes has doubled, which is likely the most dominant factor in decreasing network synchronization. Consolidating our insights, we propose improvements in Bitcoin Core to increase network synchronization.","PeriodicalId":222376,"journal":{"name":"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116885084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online Learning Algorithms for Offloading Augmented Reality Requests with Uncertain Demands in MECs
Pub Date: 2021-07-01 | DOI: 10.1109/ICDCS51616.2021.00105
Zichuan Xu, Dongqi Liu, W. Liang, Wenzheng Xu, Haipeng Dai, Qiufen Xia, Pan Zhou
Augmented Reality (AR) has various practical applications in healthcare, education, and entertainment. To provide a fully interactive and immersive experience, AR applications require extremely high responsiveness and ultra-low processing latency. Mobile edge computing (MEC) has shown great potential in meeting such stringent requirements of AR applications by serving AR requests on edge servers in close proximity to these applications. In this paper, we investigate the problem of reward maximization for AR applications with uncertain demands in an MEC network, such that the reward of provisioning services for AR applications is maximized and the responsiveness of AR applications is enhanced, subject to network resource capacity constraints. We devise an exact solution for small problem instances and an efficient approximation algorithm with a provable approximation ratio for larger ones. We also devise an online learning algorithm with bounded regret for the dynamic reward maximization problem without knowledge of future AR request arrivals, by adopting the technique of Multi-Armed Bandits (MAB). We evaluate the performance of the proposed algorithms through simulations. Experimental results show that the proposed algorithms achieve 17% higher reward than existing approaches.
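As a simple illustration of the MAB ingredient, the sketch below shows a generic UCB1-style policy for picking an edge server per request under uncertain demands. The paper's online learning algorithm and reward model are more involved, so treat the class, state, and names here as assumptions.

```python
import math

class UCBOffloader:
    """Sketch of a bandit-style offloading policy (textbook UCB1), used only to
    illustrate the idea of learning server rewards from observed outcomes."""

    def __init__(self, num_servers):
        self.counts = [0] * num_servers        # times each edge server was chosen
        self.mean_reward = [0.0] * num_servers  # empirical mean reward per server
        self.t = 0

    def choose_server(self):
        self.t += 1
        for s, c in enumerate(self.counts):
            if c == 0:                         # try every server (arm) once first
                return s
        ucb = [m + math.sqrt(2 * math.log(self.t) / c)
               for m, c in zip(self.mean_reward, self.counts)]
        return max(range(len(ucb)), key=lambda s: ucb[s])

    def update(self, server, reward):
        """Feed back the observed reward (e.g., service value minus latency penalty)."""
        self.counts[server] += 1
        c = self.counts[server]
        self.mean_reward[server] += (reward - self.mean_reward[server]) / c
```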
{"title":"Online Learning Algorithms for Offloading Augmented Reality Requests with Uncertain Demands in MECs","authors":"Zichuan Xu, Dongqi Liu, W. Liang, Wenzheng Xu, Haipeng Dai, Qiufen Xia, Pan Zhou","doi":"10.1109/ICDCS51616.2021.00105","DOIUrl":"https://doi.org/10.1109/ICDCS51616.2021.00105","url":null,"abstract":"Augmented Reality (AR) has various practical applications in healthcare, education, and entertainment. To provide a fully interactive and immersive experience, AR applications require extremely high responsiveness and ultra-low processing latency. Mobile edge computing (MEC) has shown great potential in meeting such stringent requirements and demands of AR applications by implementing AR requests in edge servers within the close proximity of these applications. In this paper, we investigate the problem of reward maximization for AR applications with uncertain demands in an MEC network, such that the reward of provisioning services for AR applications is maximized and the responsiveness of AR applications is enhanced, subject to both network resource capacity. We devise an exact solution for the problem if the problem size is small, otherwise we develop an efficient approximation algorithm with a provable approximation ratio for the problem. We also devise an online learning algorithm with a bounded regret for the dynamic reward maximization problem without the knowledge of the future arrivals of AR requests, by adopting the technique of Multi-Armed Bandits (MAB). We evaluate the performance of the proposed algorithms through simulations. Experimental results show that the proposed algorithms outperform existing studies by 17 % higher reward.","PeriodicalId":222376,"journal":{"name":"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134101986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cutting the Request Completion Time in Key-value Stores with Distributed Adaptive Scheduler
Pub Date: 2021-07-01 | DOI: 10.1109/ICDCS51616.2021.00047
Wanchun Jiang, Haoyang Li, Yulong Yan, Fa Ji, M. Jiang, Jianxin Wang, Tong Zhang
Distributed key-value stores have become a basic building block for large-scale cloud applications. In large-scale distributed key-value stores, a single end-user request usually generates many key-value access operations that are processed in parallel on different servers. Hence, the completion time of the end request is determined by the last completed key-value access operation. Accordingly, scheduling the order of key-value access operations of different end requests can effectively reduce their completion time, improving the user experience. However, existing algorithms are either hard to employ in distributed key-value stores, due to the relatively large cooperation overhead of maintaining centralized information, or unable to adapt to time-varying load and server performance under different traffic patterns. In this paper, we first formalize the scheduling problem for small mean request completion time. As a step further, because of the NP-hardness of this problem, we heuristically design the Distributed Adaptive Scheduler (DAS) for distributed key-value stores. DAS reduces the average request completion time through a distributed combination of the largest-remaining-processing-time-last and shortest-remaining-processing-time-first policies. Moreover, DAS adapts to time-varying server load and performance. Extensive simulations show that DAS reduces the mean request completion time by 15%~50% compared to the default first-come-first-served algorithm and outperforms the existing Rein-SBF algorithm under various scenarios.
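One way to picture the combined ordering is the sketch below. It is an assumed interpretation rather than the DAS algorithm itself: serve operations belonging to the request with the shortest remaining work first, and within a request defer its largest operation to the end.

```python
def order_pending_ops(pending_ops, remaining_request_work, op_cost):
    """Illustrative ordering only (assumed interpretation, not DAS itself).

    pending_ops: list of (request_id, op_id) tuples queued at this server
    remaining_request_work: request_id -> estimated remaining processing time of the request
    op_cost: (request_id, op_id) -> estimated cost of the individual operation
    """
    return sorted(
        pending_ops,
        key=lambda op: (
            remaining_request_work[op[0]],  # SRPT-style: shortest remaining request first
            op_cost[op],                    # within a request, largest operation goes last
        ),
    )
```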
{"title":"Cutting the Request Completion Time in Key-value Stores with Distributed Adaptive Scheduler","authors":"Wanchun Jiang, Haoyang Li, Yulong Yan, Fa Ji, M. Jiang, Jianxin Wang, Tong Zhang","doi":"10.1109/ICDCS51616.2021.00047","DOIUrl":"https://doi.org/10.1109/ICDCS51616.2021.00047","url":null,"abstract":"Nowadays, the distributed key-value stores have become the basic building block for large scale cloud applications. In large-scale distributed key-value stores, many key-value access operations, which will be processed in parallel on different servers, are usually generated for the data required by a single end-user request. Hence, the completion time of the end request is determined by the last completed key-value access operation. Accordingly, scheduling the order of key-value access operations of different end requests can effectively reduce their completion time, improving the user experience. However, existing algorithms are either hard to employ in distributed key-value stores due to the relatively large cooperation overhead for centralized information or unable to adapt to the time-varying load and server performance under different traffic patterns. In this paper, we first formalize the scheduling problem for small mean request completion time. As a step further, because of the NP-hardness of this problem, we heuristically design the distributed adaptive scheduler (DAS) for distributed key-value stores. DAS reduces the average request completion time by a distributed combination of the largest remaining processing time last and shortest remaining process time first algorithms. Moreover, DAS is adaptive to the time-varying server load and performance. Extensive simulations show that DAS reduces the mean request completion time by more than 15 ~ 50% compared to the default first come first served algorithm and outperforms the existing Rein-SBF algorithm under various scenarios.","PeriodicalId":222376,"journal":{"name":"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)","volume":"116 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128074975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Poster: Learning Index on Content-based Pub/Sub
Pub Date: 2021-07-01 | DOI: 10.1109/ICDCS51616.2021.00124
Cheng Lin, Qinpei Zhao, Weixiong Rao
The content-based Pub/Sub paradigm is widely used in many distributed applications, yet existing approaches suffer from highly redundant subscription index structures and low matching efficiency. To tackle this issue, we propose a multi-task learning framework that guides the construction of an efficient in-memory subscription index, namely PMIndex. The key idea of PMIndex is to merge redundant subscriptions into an optimal number of partitions for lower memory cost and faster matching time. Our initial experimental results on a synthetic dataset demonstrate that PMIndex outperforms two state-of-the-art approaches in both matching time and memory cost.
{"title":"Poster: Learning Index on Content-based Pub/Sub","authors":"Cheng Lin, Qinpei Zhao, Weixiong Rao","doi":"10.1109/ICDCS51616.2021.00124","DOIUrl":"https://doi.org/10.1109/ICDCS51616.2021.00124","url":null,"abstract":"Content-based Pub/Sub paradigm has been widely used in many distributed applications and existing approaches suffer from high redundancy subscription index structure and low matching efficiency. To tackle this issue, in this paper, we propose a learning framework to guide the construction of an efficient in-memory subscription index, namely PMIndex, via a multi-task learning framework. The key of PMIndex is to merge redundant subscriptions into an optimal number of partitions for less memory cost and faster matching time. Our initial experimental result on a synthetic dataset demonstrates that PMindex outperforms two state-of-the-arts by faster matching time and less memory cost.","PeriodicalId":222376,"journal":{"name":"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116338553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Demo: A FSM Approach to Web Collaboration
Pub Date: 2021-07-01 | DOI: 10.1109/ICDCS51616.2021.00116
C. Gadea, B. Ionescu, D. Ionescu
Operational Transformation (OT) algorithms, at the heart of web-based collaboration, have been studied since the late 1980s and remain a hot research subject. Centralized versions of OT algorithms that can be implemented on top of cloud-based serverless platforms are still unexplored. This poster introduces a Control Loop view of OT algorithms that are modeled by a series of Finite State Automata (FSAs) embedded in a serverless system architecture. A series of nested Finite State Machines (FSMs) dynamically control the co-editing processes. The proposed platform was simulated to demonstrate the correctness of the OT algorithms. Results obtained from the simulation are presented and an interactive demonstration is given.
{"title":"Demo: A FSM Approach to Web Collaboration","authors":"C. Gadea, B. Ionescu, D. Ionescu","doi":"10.1109/ICDCS51616.2021.00116","DOIUrl":"https://doi.org/10.1109/ICDCS51616.2021.00116","url":null,"abstract":"Operational Transformation (OT) algorithms, at the heart of web-based collaboration, have been studied since the late 1980s and remain a hot research subject. Centralized versions of OT algorithms that can be implemented on top of cloud-based serverless platforms are still unexplored. This poster introduces a Control Loop view of OT algorithms that are modeled by a series of Finite State Automata (FSAs) embedded in a serverless system architecture. A series of nested Finite State Machines (FSMs) dynamically control the co-editing processes. The proposed platform was simulated to demonstrate the correctness of the OT algorithms. Results obtained from the simulation are presented and an interactive demonstration is given.","PeriodicalId":222376,"journal":{"name":"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125567204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MVCom: Scheduling Most Valuable Committees for the Large-Scale Sharded Blockchain
Pub Date: 2021-07-01 | DOI: 10.1109/ICDCS51616.2021.00066
Huawei Huang, Zhenyi Huang, Xiaowen Peng, Zibin Zheng, Song Guo
In a large-scale sharded blockchain, transactions are processed collaboratively by a number of parallel committees, which can strongly boost blockchain throughput. A problem is that some groups of blockchain nodes incur large latency when forming committees at the beginning of each epoch. Furthermore, the heterogeneous processing capabilities of different committees also result in unbalanced consensus latency. Such unbalanced two-phase latency brings a large cumulative age to the transactions waiting in the final committee. Consequently, blockchain throughput can be significantly degraded because of the large cumulative age of transactions. We believe that a good committee-scheduling strategy can reduce the cumulative age and thus benefit blockchain throughput. However, we have not yet found a committee-scheduling scheme that works for accelerating block formation in the context of blockchain sharding. To this end, this paper studies a fine-balanced tradeoff between transaction throughput and cumulative age in a large-scale sharded blockchain. We formulate this tradeoff as a utility-maximization problem, which is proved NP-hard. To solve this problem, we propose an online distributed Stochastic-Exploration (SE) algorithm, which guarantees near-optimal system utility. The theoretical convergence time of the proposed algorithm, as well as the performance perturbation brought by committee failures, are also analyzed rigorously. We then evaluate the proposed algorithm using a dataset of blockchain-sharding transactions. The simulation results demonstrate that the proposed SE algorithm substantially outperforms the baselines in terms of both system utility and contributing degree while processing shard transactions.
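For intuition only, a loose sketch of a stochastic-exploration update is shown below; the log-linear acceptance rule, utility interface, and parameter names are assumptions rather than the paper's exact algorithm.

```python
import math
import random

def stochastic_exploration_step(current_choice, candidate_choices, utility, beta=1.0):
    """Loose sketch of a distributed stochastic-exploration update: a committee proposes
    a random alternative scheduling choice and migrates to it with a probability that
    grows with the utility improvement (assumed logistic acceptance rule)."""
    proposal = random.choice(candidate_choices)
    delta = utility(proposal) - utility(current_choice)
    accept_prob = 1.0 / (1.0 + math.exp(-beta * delta))
    return proposal if random.random() < accept_prob else current_choice
```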
{"title":"MVCom: Scheduling Most Valuable Committees for the Large-Scale Sharded Blockchain","authors":"Huawei Huang, Zhenyi Huang, Xiaowen Peng, Zibin Zheng, Song Guo","doi":"10.1109/ICDCS51616.2021.00066","DOIUrl":"https://doi.org/10.1109/ICDCS51616.2021.00066","url":null,"abstract":"In a large-scale sharded blockchain, transactions are processed by a number of parallel committees collaboratively. Thus, the blockchain throughput can be strongly boosted. A problem is that some groups of blockchain nodes consume large latency to form committees at the beginning of each epoch. Furthermore, the heterogeneous processing capabilities of different committees also result in unbalanced consensus latency. Such unbalanced two-phase latency brings a large cumulative age to the transactions waited in the final committee. Consequently, the blockchain throughput can be significantly degraded because of the large transaction's cumulative age. We believe that a good committee-scheduling strategy can reduce the cumulative age, and thus benefit the blockchain throughput. However, we have not yet found a committee-scheduling scheme that works for accelerating block formation in the context of blockchain sharding. To this end, this paper studies a fine-balanced tradeoff between the transaction's throughput and their cumulative age in a large-scale sharded blockchain. We formulate this tradeoff as a utility-maximization problem, which is proved NP-hard. To solve this problem, we propose an online distributed Stochastic-Exploration (SE) algorithm, which guarantees a near-optimal system utility. The theoretical convergence time of the proposed algorithm as well as the performance perturbation brought by the committee's failure are also analyzed rigorously. We then evaluate the proposed algorithm using the dataset of blockchain-sharding transactions. The simulation results demonstrate that the proposed SE algorithm shows an overwhelming better performance comparing with other baselines in terms of both system utility and the contributing degree while processing shard transactions.","PeriodicalId":222376,"journal":{"name":"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116057064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MSS: Lightweight network authentication for resource constrained devices via Mergeable Stateful Signatures
Pub Date: 2021-07-01 | DOI: 10.1109/ICDCS51616.2021.00035
Abdulrahman Bin Rabiah, Yugarshi Shashwat, Fatemah Alharbi, Silas Richelson, N. Abu-Ghazaleh
Signature-based authentication is a core cryptographic primitive essential for most secure networking protocols. We introduce a new signature scheme, MSS, that allows a client to efficiently authenticate herself to a server. We model our new scheme in an offline/online setting where client online time is at a premium. The offline component derives basis signatures that are then composed, based on the data being signed, to provide signatures efficiently and securely at run-time. MSS requires the server to maintain state and is suitable for applications where a device has long-term associations with the server. MSS allows direct comparison to hash-chain-based authentication schemes used in similar settings and is relevant to resource-constrained devices, e.g., IoT. We derive MSS instantiations for two cryptographic families, assuming the hardness of RSA and decisional Diffie-Hellman (DDH) respectively, demonstrating the generality of the idea. We then use our new scheme to design an efficient time-based one-time password (TOTP) protocol. Specifically, we implement two TOTP authentication systems from our RSA and DDH instantiations. We evaluate the TOTP implementations on Raspberry Pis, which demonstrate appealing gains: MSS reduces authentication latency and energy consumption by factors of ~82 and ~792, respectively, compared to a recent hash-chain-based TOTP system.
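For context on the baseline MSS is compared against, the sketch below shows a generic hash-chain one-time-password construction (a textbook scheme, not MSS itself): the server stores only the chain anchor, and each fresh password is the preimage of the last accepted value.

```python
import hashlib
import os

def hash_chain(seed: bytes, length: int):
    """Build a hash chain; the client keeps the chain, the server stores only chain[-1]."""
    chain = [seed]
    for _ in range(length):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

class HashChainVerifier:
    """Server side of a generic hash-chain OTP scheme (illustrative baseline only)."""

    def __init__(self, anchor: bytes):
        self.current = anchor  # last accepted (or initial) chain value

    def verify(self, otp: bytes) -> bool:
        # A fresh one-time password must hash to the server's current value.
        if hashlib.sha256(otp).digest() == self.current:
            self.current = otp
            return True
        return False

# Usage sketch: the client reveals chain[-2], then chain[-3], ... in successive periods.
seed = os.urandom(32)
chain = hash_chain(seed, 100)
server = HashChainVerifier(chain[-1])
assert server.verify(chain[-2])
```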
{"title":"MSS: Lightweight network authentication for resource constrained devices via Mergeable Stateful Signatures","authors":"Abdulrahman Bin Rabiah, Yugarshi Shashwat, Fatemah Alharbi, Silas Richelson, N. Abu-Ghazaleh","doi":"10.1109/ICDCS51616.2021.00035","DOIUrl":"https://doi.org/10.1109/ICDCS51616.2021.00035","url":null,"abstract":"Signature-based authentication is a core cryptographic primitive essential for most secure networking protocols. We introduce a new signature scheme, MSS, that allows a client to efficiently authenticate herself to a server. We model our new scheme in an offline/online model where client online time is premium. The offline component derives basis signatures that are then composed based on the data being signed to provide signatures efficiently and securely during run-time. MSS requires the server to maintain state and is suitable for applications where a device has long-term associations with the server. MSS allows direct comparison to hash chains-based authentication schemes used in similar settings, and is relevant to resource-constrained devices e.g., IoT. We derive MSS instantiations for two cryptographic families, assuming the hardness of RSA and decisional Diffie-Hellman (DDH) respectively, demonstrating the generality of the idea. We then use our new scheme to design an efficient time-based one-time password (TOTP) protocol. Specifically, we implement two TOTP authentication systems from our RSA and DDH instantiations. We evaluate the TOTP implementations on Raspberry Pis which demonstrate appealing gains: MSS reduces authentication latency and energy consumption by a factor of ~82 and 792, respectively, compared to a recent hash chain-based TOTP system.","PeriodicalId":222376,"journal":{"name":"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)","volume":"52 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116837077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}