Learning for Exception: Dynamic Service Caching in 5G-Enabled MECs with Bursty User Demands
Zichuan Xu, Shengnan Wang, Shipei Liu, Haipeng Dai, Qiufen Xia, W. Liang, Guowei Wu
Pub Date: 2020-11-01. DOI: 10.1109/ICDCS47774.2020.00098

Mobile edge computing (MEC) is envisioned as an enabling technology for extreme low-latency services in next-generation 5G access networks. In a 5G-enabled MEC, computing resources are attached to base stations, so network service providers can cache their services from remote data centers to base stations in the MEC and serve user tasks in close proximity, thereby reducing service latency. However, mobile users usually have various dynamic hidden features, such as their locations, user group tags, and mobility patterns. Such hidden features lead to uncertainties in the 5G-enabled MEC, such as user demand and processing delay, which pose significant challenges for service caching and task offloading. In this paper, we investigate the problem of dynamic service caching and task offloading in a 5G-enabled MEC with user demand and processing delay uncertainties. We first propose an online learning algorithm for the problem with given user demands using the technique of Multi-Armed Bandits (MAB), and theoretically analyze its regret bound. We also propose a novel Generative Adversarial Network (GAN) architecture to accurately predict user demands from small samples of hidden features of mobile users. Based on the proposed GAN model, we then devise an efficient heuristic for the problem with uncertainties in both user demand and processing delay. We finally evaluate the proposed algorithms by simulations on a realistic user dataset. Experimental results show that the proposed algorithms outperform existing algorithms by around 15%.
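The abstract does not spell out the paper's bandit formulation; as a flavor of the MAB technique it names, here is a minimal UCB1 sketch, assuming (hypothetically) that a base station caches one service per round and observes a normalized latency-reduction reward:

```python
import math
import random

def ucb1_service_caching(n_services, n_rounds, observe_reward):
    """Minimal UCB1 sketch: pick one service to cache each round.

    observe_reward(s) -> reward in [0, 1], e.g. a normalized latency
    reduction from caching service s (hypothetical interface, not the
    paper's actual model).
    """
    counts = [0] * n_services      # times each service was cached
    means = [0.0] * n_services     # empirical mean reward per service
    for t in range(1, n_rounds + 1):
        if t <= n_services:
            s = t - 1              # cache every service once first
        else:
            # UCB index: empirical mean + exploration bonus
            s = max(range(n_services),
                    key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]))
        r = observe_reward(s)
        counts[s] += 1
        means[s] += (r - means[s]) / counts[s]
    return means

# Toy usage: rewards drawn from fixed Bernoulli demand probabilities.
demands = [0.2, 0.5, 0.8]
means = ucb1_service_caching(3, 1000, lambda s: float(random.random() < demands[s]))
```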
Distributed Hierarchical File Systems strike back in the Cloud
Mahmoud Ismail, Salman Niazi, Mauritz Sundell, Mikael Ronström, Seif Haridi, J. Dowling
Pub Date: 2020-11-01. DOI: 10.1109/ICDCS47774.2020.00108

Cloud service providers have aligned on availability zones as an important unit of failure and replication for storage systems. An availability zone (AZ) has independent power, networking, and cooling systems and consists of one or more data centers. Multiple AZs in close geographic proximity form a region that can support replicated low-latency storage services that survive the failure of one or more AZs. Recent reductions in inter-AZ latency have made synchronous replication protocols increasingly viable as an alternative to traditional quorum-based replication protocols. We introduce HopsFS-CL, a distributed hierarchical file system with support for high availability (HA) across AZs, backed by AZ-aware synchronously replicated metadata and AZ-aware block replication. HopsFS-CL is a redesign of HopsFS, a version of HDFS with distributed metadata, and its design involved making replication and block placement protocols AZ-aware at all layers of its stack: the metadata serving, metadata storage, and block storage layers. In experiments on a real-world workload from Spotify, we show that HopsFS-CL, deployed in HA mode over 3 AZs, reaches 1.66 million ops/s and has similar performance to HopsFS deployed in a single AZ, while preserving the same semantics.
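The abstract does not give the concrete placement rule, but the core idea of AZ-aware block replication can be sketched minimally: place each replica in a distinct AZ so that losing any single AZ leaves the remaining copies intact. Names and the replication factor below are illustrative assumptions:

```python
import random

def az_aware_placement(nodes_by_az, replication=3):
    """Pick one replica target in each of `replication` distinct AZs,
    so losing any single AZ leaves replication-1 copies.

    nodes_by_az: dict mapping AZ name -> list of node ids (hypothetical).
    """
    if replication > len(nodes_by_az):
        raise ValueError("need at least one AZ per replica")
    azs = random.sample(sorted(nodes_by_az), replication)  # distinct AZs
    return [random.choice(nodes_by_az[az]) for az in azs]

placement = az_aware_placement(
    {"az1": ["n1", "n2"], "az2": ["n3", "n4"], "az3": ["n5"]})
```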
Elan: Towards Generic and Efficient Elastic Training for Deep Learning
Lei Xie, Jidong Zhai, Baodong Wu, Yuanbo Wang, Xingcheng Zhang, Peng Sun, Shengen Yan
Pub Date: 2020-11-01. DOI: 10.1109/ICDCS47774.2020.00018

Elastic deep learning training, which promises improved resource utilization and accelerated training, has recently attracted increasing attention. Nevertheless, existing approaches to elasticity have limitations: they either fail to fully exploit the parallelism of deep learning training when scaling out, or lack an efficient mechanism to replicate training state across devices. To address these limitations, we design Elan, a generic and efficient elastic training system for deep learning. In Elan, we propose a novel hybrid scaling mechanism that makes a good trade-off between training efficiency and model performance when exploiting more parallelism. We exploit the topology of the underlying devices to perform concurrent, IO-free replication of training state. To avoid the high overhead of start-up and initialization, we further propose an asynchronous coordination mechanism. Powered by these innovations, Elan provides high-performance (~1 s) migration, scale-in, and scale-out support with negligible runtime overhead (<3‰). For elastic training of ResNet-50 on ImageNet, Elan improves the time to solution by 20%. For elastic scheduling, with the help of Elan, resource utilization is improved by over 21% and job pending time is reduced by over 43%.
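Elan's hybrid scaling mechanism is only named in the abstract. As one hypothetical illustration of the efficiency/model-quality tension it mentions, a common elastic-training policy keeps the global batch size fixed while workers join or leave, rescaling the per-worker batch and learning rate (this is the well-known linear scaling rule, not necessarily Elan's mechanism):

```python
def rescale_on_elastic_event(global_batch, base_lr, n_workers):
    """Hypothetical helper: keep the global batch roughly fixed across
    scaling events by adjusting the per-worker batch size; scale the
    learning rate linearly with the effective global batch achieved.
    """
    per_worker = max(1, global_batch // n_workers)
    effective_global = per_worker * n_workers
    lr = base_lr * effective_global / global_batch  # linear scaling rule
    return per_worker, lr

# Scaling out from 8 to 12 workers with a global batch of 1024:
print(rescale_on_elastic_event(1024, 0.1, 12))   # (85, ~0.0996)
```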
Game Theoretical Analysis of Cross-Chain Swaps
M. Belotti, Stefano Moretti, M. Potop-Butucaru, Stefano Secci
Pub Date: 2020-11-01. DOI: 10.1109/ICDCS47774.2020.00060

In this paper we address the distributed cross-chain swap problem in the blockchain context, where multiple agents exchange assets across multiple blockchain systems (e.g., trading Bitcoins for Litecoins or Ethers). We present a mathematical framework that characterizes blockchain swap protocols as the combination of a publishing phase and a commitment phase, in which contracts are respectively published and then committed. We characterize the equilibria of existing cross-chain swap protocols (i.e., blockchain swap protocols exchanging assets among different blockchains). More precisely, we prove that following a swap protocol with concurrent publishing of exchange contracts and snap (immediate) asset transfers is a Nash equilibrium. Furthermore, we prove that for protocols with sequential publishing and commitment of asset transfers, following the prescribed protocol is a subgame perfect equilibrium.
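The paper's actual payoffs depend on asset values and protocol structure; as a minimal sketch of what "following the protocol is a Nash equilibrium" means, here is a generic unilateral-deviation check over a hypothetical 2x2 payoff matrix for a two-party swap:

```python
def is_nash(payoffs, profile):
    """payoffs[(a1, a2)] -> (u1, u2); profile is a pair of actions.
    The profile is a Nash equilibrium iff no player gains by deviating
    alone while the other player's action stays fixed."""
    actions = {a for pair in payoffs for a in pair}
    a1, a2 = profile
    u1, u2 = payoffs[profile]
    return (all(payoffs[(d, a2)][0] <= u1 for d in actions) and
            all(payoffs[(a1, d)][1] <= u2 for d in actions))

# Hypothetical payoffs: both following completes the swap (gain 1 each);
# a lone deviator forfeits the trade and any locked collateral.
payoffs = {
    ("follow", "follow"): (1, 1),
    ("follow", "deviate"): (0, -1),
    ("deviate", "follow"): (-1, 0),
    ("deviate", "deviate"): (0, 0),
}
assert is_nash(payoffs, ("follow", "follow"))
```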
MobiRescue: Reinforcement Learning based Rescue Team Dispatching in a Flooding Disaster
Li Yan, Shohaib Mahmud, Haiying Shen, N. Foutz, Joshua Anton
Pub Date: 2020-11-01. DOI: 10.1109/ICDCS47774.2020.00033

Effective dispatching of rescue teams during a flooding disaster is crucial. However, previous emergency vehicle dispatching methods cannot handle flooding disaster situations, and previous rescue team dispatching methods can neither accurately estimate the positions of potential rescue requests nor dispatch rescue teams according to the real-time distribution of requests. In this paper, we propose MobiRescue, a human-mobility-based rescue team dispatching system that aims to maximize the total number of fulfilled rescue requests while minimizing both the rescue teams' driving delay to request positions and the number of dispatched teams. We studied a city-scale human mobility dataset for Hurricane Florence and found that the severity of the disaster's impact varies considerably across regions, and that people's movement was significantly affected by the disaster, which means the rescue teams' driving routes should be adjusted adaptively. We then propose a Support Vector Machine (SVM) based method to predict the distribution of potential rescue requests on each road segment. Based on the predicted distribution, we develop a Reinforcement Learning (RL) based rescue team dispatching method to achieve the aforementioned goals. Our trace-driven experiments demonstrate the superior performance of MobiRescue over comparison methods.
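The abstract does not give the RL formulation; as a generic illustration of the kind of dispatching policy learning it describes, here is one epsilon-greedy tabular Q-learning step, assuming (hypothetically) that states are coarse map cells and actions are candidate road segments to dispatch toward:

```python
import random
from collections import defaultdict

def q_learning_step(Q, state, actions, reward_fn, next_state_fn,
                    alpha=0.1, gamma=0.9, eps=0.1):
    """One epsilon-greedy tabular Q-learning step (illustrative only;
    states/actions stand in for map cells / road segments, and
    reward_fn/next_state_fn are hypothetical environment hooks)."""
    if random.random() < eps:
        a = random.choice(actions)                       # explore
    else:
        a = max(actions, key=lambda x: Q[(state, x)])    # exploit
    r = reward_fn(state, a)      # e.g. fulfilled requests minus delay cost
    s2 = next_state_fn(state, a)
    best_next = max(Q[(s2, x)] for x in actions)
    Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
    return s2

Q = defaultdict(float)   # Q-table over (state, action) pairs
```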
PrivacyScope: Automatic Analysis of Private Data Leakage in TEE-Protected Applications
Ruide Zhang, Ning Zhang, A. Moini, W. Lou, Thomas Hou
Pub Date: 2020-11-01. DOI: 10.1109/ICDCS47774.2020.00013

Big data analytics is having a profound impact on many sectors of the economy by transforming raw data into actionable intelligence. However, the increased use of sensitive business and private personal data with no or limited privacy safeguards has raised great concerns among individuals and government regulators. To address the growing tension between the need for data utility and the demand for data privacy, trusted execution environments (TEEs) are being used in academic research as well as industrial applications as a powerful primitive for confidential computation on private data, disclosing only the result and not the original private data. While much of the current research has focused on protecting the TEE against attacks (e.g., side-channel information leakage), the security and privacy of the applications executing inside a TEE enclave have received little attention. The general attitude is that the application is running inside a trusted computing base (TCB) and can therefore be trusted. This assumption may not be valid for unverified third-party applications. In this paper, we present PrivacyScope, a static code analyzer designed to detect leakage of private data by application code running in a TEE. PrivacyScope accomplishes this by analyzing the application code and identifying violations of a property called nonreversibility. We introduce nonreversibility because the classical noninterference property falls short of detecting private data leakage in certain scenarios, e.g., in machine learning (ML) programs where the program output is always related to the (private) input data; given its strict reliance on observable state, noninterference cannot detect private data leakage in these situations. By design, PrivacyScope detects both explicit and implicit information leakage. The nonreversibility property is formally defined based on the noninterference property, and we describe the PrivacyScope algorithms as extensions to the runtime semantics of a general language. To evaluate the efficacy of our approach with a proof-of-feasibility prototype, we apply PrivacyScope to detect data leakage in select open-source ML code modules, including linear regression, k-means clustering, and collaborative filtering. PrivacyScope can also detect intentional data-leakage code injected by a programmer. We responsibly disclosed all discovered vulnerabilities leading to disclosure of private data in the open-source ML programs we analyzed.
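The distinction between explicit and implicit leakage that PrivacyScope must catch can be shown with a toy taint tracker over a hypothetical mini-IR of assignments and branches; this sketches the flavor of the analysis, not the paper's nonreversibility algorithm:

```python
def taints_output(stmts, private_vars, output_var):
    """Toy taint propagation: ('assign', x, deps) taints x if any dep is
    tainted (explicit flow); ('branch', cond_deps, body) taints every
    variable assigned under a tainted condition (implicit flow)."""
    tainted = set(private_vars)

    def walk(block, pc_tainted):
        for s in block:
            if s[0] == "assign":
                _, x, deps = s
                if pc_tainted or any(d in tainted for d in deps):
                    tainted.add(x)
                else:
                    tainted.discard(x)   # overwritten with clean data
            elif s[0] == "branch":
                _, cond_deps, body = s
                walk(body, pc_tainted or any(d in tainted for d in cond_deps))

    walk(stmts, False)
    return output_var in tainted

# Implicit flow: 'out' depends on 'secret' only via the branch condition,
# which classical value-based tracking of explicit flows would miss.
prog = [("branch", ["secret"], [("assign", "out", [])])]
assert taints_output(prog, {"secret"}, "out")
```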
TagMic: Listening Through RFID Signals
Yin Li, Chunhui Duan, Xuan Ding, Cihang Liu
Pub Date: 2020-11-01. DOI: 10.1109/ICDCS47774.2020.00136

RFID is an increasingly ubiquitous technology, widely adopted in industry and in daily life. But when it comes to eavesdropping, people usually pay attention to devices like cameras and mobile phones rather than small, battery-free RFID tags. This work shows the possibility of using prevalent RFID tags to capture and recognize acoustic signals. Specifically, we attach an RFID tag to an object located in the vicinity of a sound source. Our key innovation lies in translating the vibrations induced when the sound wave hits the object's surface into fluctuations in the tag's RF signals. Although the inherent sampling rate of commercial RFID devices is low and the vibrations are very subtle, we still extract characteristic features from imperfect measurements by leveraging state-of-the-art machine learning and signal processing algorithms. We have implemented our system with commercial RFID and loudspeaker equipment and evaluated it intensively in our lab environment. Experimental results show that the average success rate in detecting single-tone sounds can reach as high as 93.10%. We believe our work will draw attention to RFID in the context of surveillance and security.
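A minimal sketch of the core signal-processing step such a system needs, assuming a stream of tag phase readings at a known (low) sampling rate; the detrend-window-FFT pipeline here is a standard recipe, not TagMic's published algorithm:

```python
import numpy as np

def dominant_tone(phase_readings, sample_rate):
    """Estimate the strongest vibration frequency from tag phase samples:
    unwrap, detrend, window, FFT, and pick the peak bin."""
    x = np.unwrap(np.asarray(phase_readings, dtype=float))
    x = x - x.mean()                         # remove DC / slow drift
    x = x * np.hanning(len(x))               # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate)
    return freqs[spectrum[1:].argmax() + 1]  # skip the DC bin

# Toy check: a 40 Hz tone sampled at 400 Hz is recovered.
t = np.arange(0, 1, 1 / 400)
print(dominant_tone(0.1 * np.sin(2 * np.pi * 40 * t), 400))  # ~40.0
```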
Exact Consensus under Global Asymmetric Byzantine Links
Lewis Tseng, Qinzi Zhang, Saptaparni Kumar, Yifan Zhang
Pub Date: 2020-11-01. DOI: 10.1109/ICDCS47774.2020.00103

Fault-tolerant distributed consensus is an important primitive in many large-scale distributed systems and applications. The consensus problem has been investigated under various fault models since the seminal work by Lamport et al. in 1982. In this paper, we study the exact consensus problem in a new faulty-link model, the global asymmetric Byzantine (GAB) link model. Our link-fault model is simple, yet to our surprise it has not been studied before. In our system, all nodes are fault-free and each pair of nodes can communicate directly with each other. In the GAB link model, up to f directed links may become Byzantine and behave arbitrarily; non-faulty links deliver messages reliably. In our model, it is possible that the link from node a to node b is faulty while the link from node b to node a is fault-free. Unlike prior models with a local constraint, which enforce a local upper bound on the number of faulty links attached to each node, we adopt a global constraint that allows any link in the GAB model to be corrupted. These global and asymmetric features distinguish our model from all prior faulty-link models. In the GAB model, we study the consensus problem in both synchronous and asynchronous systems. We show that 2f + 1 nodes are both necessary and sufficient for solving synchronous consensus, whereas 2f + 2 nodes are the tight resilience condition for asynchronous consensus. We also study models in which faulty links are mobile (or transient), i.e., the set of faulty links may change from round to round, and show that 2f + 3 nodes are necessary and sufficient for a family of algorithms that update local state iteratively.
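The three resilience thresholds stated in the abstract are easy to capture programmatically; a small helper summarizing them (the bounds are the paper's, the function itself is just illustrative):

```python
def gab_consensus_feasible(n, f, asynchronous=False, mobile_links=False):
    """Check the node-count bounds from the paper for exact consensus
    under up to f Byzantine directed links (GAB model)."""
    if mobile_links:
        return n >= 2 * f + 3   # iterative algorithms, mobile link faults
    if asynchronous:
        return n >= 2 * f + 2   # tight bound for asynchronous consensus
    return n >= 2 * f + 1       # tight bound for synchronous consensus

assert gab_consensus_feasible(3, 1)                        # sync, f = 1
assert not gab_consensus_feasible(3, 1, asynchronous=True) # needs n >= 4
```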
A LoRaWAN-MAC Protocol Based on WSN Residual Energy to Adjust Duty Cycle*
Ting Xu, Mingde Zhao
Pub Date: 2020-11-01. DOI: 10.1109/ICDCS47774.2020.00187

As wireless sensor networks grow in scale, there are ever higher requirements on the timeliness of information delivery and the power consumption of network equipment. LoRa technology for low-power wide-area networks addresses part of this need, but the energy consumption of terminal nodes remains a major challenge. In this paper, we propose the REDS strategy based on LoRa technology. First, residual energy is used to detect the preamble in the beacon, so that channel activity detection (CAD) can distinguish a valid preamble from noise and avoid false wake-ups. Nodes in the area far from the network sink then use their remaining energy to wake up and perform CAD channel detection, and each node selects its channel-access method according to the channel quality: when the network traffic load is low, nodes communicate using CSMA-CA contention; when the traffic load is high, nodes communicate using a dynamic duty-cycle method. We show that network delay can be reduced by 29% and energy efficiency improved by 19%, thereby avoiding data collisions, improving channel utilization, and balancing the trade-off between network delay and network energy consumption.
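A minimal sketch of the access-mode selection the abstract describes; the load threshold, energy floor, and duty-cycle formula below are illustrative assumptions, since the paper's concrete parameters are not given in the abstract:

```python
def select_access(traffic_load, residual_energy,
                  load_threshold=0.5, energy_floor=0.2):
    """Pick a channel-access mode per the REDS idea: CSMA-CA contention
    under light load, dynamic duty cycling under heavy load; nodes below
    the energy floor stay asleep. All thresholds are hypothetical."""
    if residual_energy < energy_floor:
        return "sleep"                       # not enough energy for CAD
    if traffic_load < load_threshold:
        return "csma-ca"                     # contention-based access
    # Heavier load: grow the duty cycle with load to limit collisions.
    duty_cycle = min(1.0, 0.1 + 0.9 * traffic_load)
    return ("dynamic-duty-cycle", duty_cycle)

print(select_access(0.8, 0.6))   # ('dynamic-duty-cycle', 0.82)
```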