In secure multi-party shuffling, multiple parties, each holding an input, want to agree on a random permutation of their inputs while keeping the permutation secret. This problem is an important primitive in many privacy-preserving applications such as anonymous communication, location-based services, and electronic voting. Known techniques for solving this problem suffer from poor scalability, load-balancing issues, trusted-party assumptions, and/or weak security guarantees. In this paper, we propose an unconditionally secure protocol for multi-party shuffling that scales well with the number of parties and is load-balanced. In particular, we require each party to send only a polylogarithmic number of bits and perform a polylogarithmic number of operations, while incurring only a logarithmic round complexity. We show security under universal composability against up to about n/3 fully malicious parties. We also provide simulation results in the full version of this paper showing that our protocol improves significantly over previous work. For example, for one million parties, when compared to the state of the art, our protocol reduces the communication and computation costs by at least three orders of magnitude and slightly decreases the number of communication rounds.
{"title":"Shuffle to Baffle: Towards Scalable Protocols for Secure Multi-party Shuffling","authors":"Mahnush Movahedi, Jared Saia, M. Zamani","doi":"10.1109/ICDCS.2015.116","DOIUrl":"https://doi.org/10.1109/ICDCS.2015.116","url":null,"abstract":"In secure multi-party shuffling, multiple parties, each holding an input, want to agree on a random permutation of their inputs while keeping the permutation secret. This problem is important as a primitive in many privacy-preserving applications such as anonymous communication, location-based services, and electronic voting. Known techniques for solving this problem suffer from poor scalability, load-balancing issues, trusted party assumptions, and/or weak security guarantees. In this paper, we propose an unconditionally-secure protocol for multi-party shuffling that scales well with the number of parties and is load-balanced. In particular, we require each party to send only a polylogarithmic number of bits and perform a polylogarithmic number of operations while incurring only a logarithmic round complexity. We show security under universal compos ability against up to about n/3 fully-malicious parties. We also provide simulation results in the full version of this paper showing that our protocol improves significantly over previous work. For example, for one million parties, when compared to the state of the art, our protocol reduces the communication and computation costs by at least three orders of magnitude and slightly decreases the number of communication rounds.","PeriodicalId":129182,"journal":{"name":"2015 IEEE 35th International Conference on Distributed Computing Systems","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132019671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Despite suffering from inefficiency and flexibility limitations, the filter-based routing (FBR) algorithm is widely used in content-based publish/subscribe (pub/sub) systems. To address its limitations, we propose a dynamic destination-based routing algorithm called D-DBR, which decomposes pub/sub routing into two independent parts: content-based matching and destination-based multicasting. D-DBR exhibits low event matching cost and high efficiency, flexibility, and robustness for event routing in small-scale overlays. To improve its scalability to large-scale overlays, we further extend D-DBR into a new routing algorithm called MERC. MERC divides the overlay into interconnected clusters and applies content-based and destination-based mechanisms to route events inter- and intra-cluster, respectively. We implemented all algorithms in the PADRES pub/sub system. Experimental results show that our algorithms outperform the FBR algorithm.
{"title":"Towards Scalable Publish/Subscribe Systems","authors":"Shuping Ji, Chunyang Ye, Jun Wei, H. Jacobsen","doi":"10.1109/ICDCS.2015.108","DOIUrl":"https://doi.org/10.1109/ICDCS.2015.108","url":null,"abstract":"Despite suffering from inefficiency and flexibility limitations, the filter-based routing (FBR) algorithm is widely used in content-based publish/subscribe (pub/sub) systems. To address its limitations, we propose a dynamic destination-based routing algorithm called D-DBR, which decomposes pub/sub into two independent parts: Content-based matching and destination based multicasting. D-DBR exhibits low event matching cost and high efficiency, flexibility, and robustness for event routing in small-scale overlays. To improve its scalability to large-scale overlays, we further extend D-DBR to a new routing algorithm called MERC. MERC divides the overlay into interconnected clusters and applies content-based and destination-based mechanisms to route events inter- and intra-cluster, respectively. We implemented all algorithms in the PADRES pub/sub system. Experimental results show that our algorithms outperform the FBR algorithm.","PeriodicalId":129182,"journal":{"name":"2015 IEEE 35th International Conference on Distributed Computing Systems","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125099137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As distributed systems, wireless networks usually face complex environments in which transient faults and topology changes occur frequently. The connected dominating set (CDS) problem has been widely studied due to its important applications in wireless communication and networking, especially its role as a virtual backbone for efficient routing. In this paper, under the SINR (Signal-to-Interference-plus-Noise-Ratio) model, we propose a distributed self-stabilizing maximal independent set (MIS) algorithm, DSSMIS. Based on DSSMIS, we design a distributed self-stabilizing algorithm, DSSCDS, that constructs a CDS with a constant approximation ratio within O(log n) rounds. To the best of our knowledge, this is the first self-stabilizing CDS algorithm under the SINR model.
{"title":"A Self-Stabilizing Algorithm for CDS Construction with Constant Approximation in Wireless Networks under SINR Model","authors":"Jiguo Yu, Lili Jia, Wei Li, Xiuzhen Cheng, Shengling Wang, R. Bie, Dongxiao Yu","doi":"10.1109/ICDCS.2015.112","DOIUrl":"https://doi.org/10.1109/ICDCS.2015.112","url":null,"abstract":"As a distributed system, a wireless network, usually faces a complex environment (transient faults and topology changes occur frequently). The connected dominating set (CDS) problem has been widely studied due to its important applications in wireless communication and networks, especially the important role as a virtual backbone for efficient routing. In this paper, under SINR (Signal-to-Interference-plus-Noise-Ratio) model, we propose a distributed self-stabilizing maximal independent set (MIS) algorithm (DSSMIS). Based on DSSMIS, we design a distributed self-stabilizing algorithm (DSSCDS) for CDS construction with constant approximation within O(log n) rounds. To best of our knowledge, this is the first self-stabilizing CDS algorithm under SINR model.","PeriodicalId":129182,"journal":{"name":"2015 IEEE 35th International Conference on Distributed Computing Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131529930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wireless charging has provided a convenient alternative to renew sensors' energy in wireless sensor networks. Due to physical limitations, previous works have only considered recharging a single node at a time, which limits efficiency and scalability. Recent advances in multi-hop wireless charging are gaining momentum and provide fundamental support to address this problem. However, existing single-node charging designs do not consider, and cannot take advantage of, such opportunities. In this paper, we propose a new framework to enable multi-hop wireless charging using resonant repeaters. First, we present a realistic model that accounts for detailed physical factors to calculate charging efficiencies. Second, to balance energy efficiency against data latency, we propose a hybrid data gathering strategy that combines static and mobile data gathering to overcome their respective drawbacks, and we provide a theoretical analysis. We then formulate the multi-hop recharge schedule as a bi-objective NP-hard optimization problem and propose a two-step approximation algorithm that first finds the minimum charging cost and then calculates the charging vehicles' moving costs, with bounded approximation ratios. Finally, upon discovering more room to reduce the total system cost, we develop a post-optimization algorithm that iteratively adds stopping locations for the charging vehicles to further improve the results. Our extensive simulations show that the proposed algorithms handle dynamic energy demands effectively, cover at least three times as many nodes, and reduce service interruption time by an order of magnitude compared to the single-node charging scheme.
{"title":"Improve Charging Capability for Wireless Rechargeable Sensor Networks Using Resonant Repeaters","authors":"Cong Wang, Ji Li, Fan Ye, Yuanyuan Yang","doi":"10.1109/ICDCS.2015.22","DOIUrl":"https://doi.org/10.1109/ICDCS.2015.22","url":null,"abstract":"Wireless charging has provided a convenient alternative to renew sensors' energy in wireless sensor networks. Due to physical limitations, previous works have only considered recharging a single node at a time, which has limited efficiency and scalability. Recent advance on multi-hop wireless charging is gaining momentum to provide fundamental support to address this problem. However, existing single-node charging designs do not consider and cannot take advantage of such opportunities. In this paper, we propose a new framework to enable multi-hop wireless charging using resonant repeaters. First, we present a realistic model that accounts for detailed physical factors to calculate charging efficiencies. Second, to achieve balance between energy efficiency and data latency, we propose a hybrid data gathering strategy that combines static and mobile data gathering to overcome their respective drawbacks and provide theoretical analysis. Then we formulate multi-hop recharge schedule into a bi-objective NP-hard optimization problem. We propose a two-step approximation algorithm that first finds the minimum charging cost and then calculates the charging vehicles' moving costs with bounded approximation ratios. Finally, upon discovering more room to reduce the total system cost, we develop a post-optimization algorithm that iteratively adds more stopping locations for charging vehicles to further improve the results. Our extensive simulations show that the proposed algorithms can handle dynamic energy demands effectively, and can cover at least three times of nodes and reduce service interruption time by an order of magnitude compared to the single-node charging scheme.","PeriodicalId":129182,"journal":{"name":"2015 IEEE 35th International Conference on Distributed Computing Systems","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128949722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A self-stabilizing system is one that converges to a legitimate state from any arbitrary state. Such an arbitrary state may be reachable due to wrong initialization or the occurrence of transient faults. The average recovery time of self-stabilizing systems is a key factor in evaluating their performance, especially in the domain of network and robotic protocols. This paper introduces a groundbreaking result on the automated repair and synthesis of self-stabilizing protocols whose average recovery time is required to satisfy certain constraints. We show that synthesizing and repairing weak-stabilizing protocols under average recovery time constraints is NP-complete. To cope with this exponential complexity (unless P = NP), we propose a polynomial-time heuristic.
{"title":"Synthesizing Self-Stabilizing Protocols under Average Recovery Time Constraints","authors":"Saba Aflaki, Fathiyeh Faghih, Borzoo Bonakdarpour","doi":"10.1109/ICDCS.2015.65","DOIUrl":"https://doi.org/10.1109/ICDCS.2015.65","url":null,"abstract":"A self-stabilizing system is one that converges to a legitimate state from any arbitrary state. Such an arbitrary state may be reachable due to wrong initialization or the occurrence of transient faults. Average recovery time of self-stabilizing systems is a key factor in evaluating their performance, especially in the domain of network and robotic protocols. This paper introduces a groundbreaking result on automated repair and synthesis of self-stabilizing protocols whose average recovery time is required to satisfy certain constraints. We show that synthesizing and repairing weak-stabilizing protocols under average recovery time constraints is NP-complete. To cope with the exponential complexity (unless P = NP), we propose a polynomial-time heuristic.","PeriodicalId":129182,"journal":{"name":"2015 IEEE 35th International Conference on Distributed Computing Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130169006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Holistic aggregation results are important for users to obtain summary information from Wireless Sensor Networks (WSNs). Exact holistic aggregation requires all the sensory data to be sent to the sink, which costs a huge amount of energy. Fortunately, in most applications, approximate results are acceptable. In this paper, we study approximate holistic aggregation algorithms based on uniform sampling and investigate four holistic aggregation operations. We propose mathematical methods to construct their estimators and determine the optimal sample size, and we prove the correctness of these methods. Four corresponding distributed holistic aggregation algorithms are presented. Theoretical analysis and simulation results show that the algorithms have high performance.
{"title":"Approximate Holistic Aggregation in Wireless Sensor Networks","authors":"Ji Li, Siyao Cheng, Zhipeng Cai, Jiguo Yu, Chaokun Wang, Yingshu Li","doi":"10.1145/3027488","DOIUrl":"https://doi.org/10.1145/3027488","url":null,"abstract":"Holistic aggregation results are important for users to obtain summary information from Wireless Sensor Networks (WSNs). Holistic aggregation requires all the sensory data to be sent to the sink, which costs a huge amount of energy. Fortunately, in most applications, approximate results are acceptable. We study the approximated holistic aggregation algorithms based on uniform sampling. In this paper, four holistic aggregation operations are investigated. The mathematical methods to construct their estimators and determine the optional sample size are proposed, and the correctness of these methods is proved. Four corresponding distributed holistic algorithms are presented. The theoretical analysis and simulation results show that the algorithms have high performance.","PeriodicalId":129182,"journal":{"name":"2015 IEEE 35th International Conference on Distributed Computing Systems","volume":"9 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123721673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The MapReduce programming model, due to its simplicity and scalability, has become an essential tool for processing large data volumes in distributed environments. Recent Stream Processing Systems (SPS) adapt this model to provide low-latency analysis of high-velocity continuous data streams. However, integrating MapReduce with streaming poses challenges: first, runtime variations in data characteristics such as data rates and key distribution cause resource overload, which in turn leads to fluctuations in the Quality of Service (QoS); and second, stateful reducers, whose state depends on the complete tuple history, necessitate efficient fault-recovery mechanisms to maintain the desired QoS in the presence of resource failures. We propose an integrated streaming MapReduce architecture that leverages consistent hashing to support runtime elasticity, along with locality-aware data and state replication to provide efficient load balancing with low-overhead fault tolerance and parallel recovery from multiple simultaneous failures. Our evaluation on a private cloud shows up to a 2.8× improvement in peak throughput compared to the Apache Storm SPS, and a low recovery latency of 700-1500 ms from multiple failures.
{"title":"Fault-Tolerant and Elastic Streaming MapReduce with Decentralized Coordination","authors":"A. Kumbhare, M. Frîncu, Yogesh L. Simmhan, V. Prasanna","doi":"10.1109/ICDCS.2015.41","DOIUrl":"https://doi.org/10.1109/ICDCS.2015.41","url":null,"abstract":"The MapReduce programming model, due to its simplicity and scalability, has become an essential tool for processing large data volumes in distributed environments. Recent Stream Processing Systems (SPS) this model to provide low-latency analysis of high-velocity continuous data streams. However, integrating MapReduce with streaming poses challenges: first, the runtime variations in data characteristics such as data-rates and key-distribution cause resource overload, that in-turn leads to fluctuations in the Quality of the Service (QoS), and second, the stateful reducers, whose state depends on the complete tuple history, necessitates efficient fault-recovery mechanisms to maintain the desired QoS in the presence of resource failures. We propose an integrated streaming MapReduce architecture leveraging the concept of consistent hashing to support runtime elasticity along with locality-aware data and state replication to provide efficient load-balancing with low-overhead fault-tolerance and parallel fault-recovery from multiple simultaneous failures. Our evaluation on a private cloud shows up to 2.8× improvement in peak throughput compared to Apache Storm SPS, and a low recovery latency of 700 - 1500 ms from multiple failures.","PeriodicalId":129182,"journal":{"name":"2015 IEEE 35th International Conference on Distributed Computing Systems","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129284148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online behavioral advertising (OBA) has become one of the most successful advertising models on the Internet. Nevertheless, all existing OBA systems are broker-centric in the billing phase, which means the broker exclusively determines advertisers' expenses and publishers' revenues. Consequently, a malicious broker may cheat in its tallying of ad clicks to overcharge advertisers or underpay publishers. Furthermore, as the broker cannot justify the bills, malicious advertisers may deny actual clicks to ask for refunds, and malicious publishers may claim non-existent clicks to demand extra revenue shares. This paper solves these problems by reversing the priority between the broker and the advertisers and publishers. Specifically, when users click on ads, the corresponding advertisers and publishers check, anonymize, and sign the clients' click reports before forwarding them to the broker. The broker then settles accounts with advertisers and publishers based entirely on these reports. To guarantee the interests of the broker after this priority reversal, we further propose effective mechanisms for detecting under-reporting advertisers and over-reporting publishers, respectively.
{"title":"Advertiser and Publisher-centric Privacy Aware Online Behavioral Advertising","authors":"Jingyu Hua, An Tang, Sheng Zhong","doi":"10.1109/ICDCS.2015.38","DOIUrl":"https://doi.org/10.1109/ICDCS.2015.38","url":null,"abstract":"Online behavioral advertising (OBA) has become one of the most successful advertising models on the Internet. Nevertheless, all existing OBA systems are broker-centric in the billing phase, which means it is the broker who exclusively determines advertisers' expenses and publishers' revenues. Consequently, a malicious broker may cheat in their tallying of ad clicks to overcharge advertisers or underpay publishers. Furthermore, as the broker cannot justify the bills, malicious advertisers may deny actual clicks to ask for refunds, and malicious publishers may claim non-existing clicks to demand extra revenue shares. This paper solves these problems by reversing the priority between the broker and the advertisers and publishers. Specifically, when users click on ads, it makes corresponding advertisers and publishers forward click reports of clients to the broker after checking, anonymizing and signing them. The broker then settles accounts with advertisers and publishers fully based on these reports. To guarantee the interests of the broker after the priority reversal, we further propose effective mechanisms for detecting underreporting advertisers and over reporting publishers, respectively.","PeriodicalId":129182,"journal":{"name":"2015 IEEE 35th International Conference on Distributed Computing Systems","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116888606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An increasing amount of mobility data is being collected every day by different means, e.g., by mobile phone operators. This data is sometimes published after the application of simple anonymization techniques, which might lead to severe privacy threats. We propose in this paper a new solution whose novelty is two-fold. Firstly, we introduce an algorithm designed to hide the places where a user stops during her journey (namely, points of interest) by enforcing a constant speed along her trajectory. Secondly, we leverage places where users meet to let them swap their trajectories and thereby confuse an attacker.
{"title":"Privacy-Preserving Publication of Mobility Data with High Utility","authors":"Vincent Primault, Sonia Ben Mokhtar, L. Brunie","doi":"10.1109/ICDCS.2015.117","DOIUrl":"https://doi.org/10.1109/ICDCS.2015.117","url":null,"abstract":"An increasing amount of mobility data is being collected every day by different means, e.g., By mobile phone operators. This data is sometimes published after the application of simple anonymization techniques, which might lead to severe privacy threats. We propose in this paper a new solution whose novelty is two-fold. Firstly, we introduce an algorithm designed to hide places where a user stops during her journey (namely points of interest), by enforcing a constant speed along her trajectory. Secondly, we leverage places where users meet to take a chance to swap their trajectories and therefore confuse an attacker.","PeriodicalId":129182,"journal":{"name":"2015 IEEE 35th International Conference on Distributed Computing Systems","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127389906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Self-stabilizing algorithms are distributed algorithms that tolerate transient failures. Starting from any configuration, they allow the system to detect whether the current configuration is legal and, if not, to eventually reach a legal configuration. In the context of network computing, it is known that, for every task, there is a self-stabilizing algorithm solving that task with optimal space complexity but converging in an exponential number of rounds. On the other hand, it is also known that, for every task, there is a self-stabilizing algorithm solving that task in a linear number of rounds but with large space complexity. It is, however, not known whether for every task there exists a self-stabilizing algorithm that is simultaneously space-efficient and time-efficient. In this paper, we make a first attempt at answering this question by focusing on constrained spanning tree construction tasks. We present a general roadmap for the design of silent space-optimal self-stabilizing algorithms solving such tasks, converging in polynomially many rounds under the unfair scheduler. By applying our roadmap to the task of constructing a minimum-weight spanning tree (MST) and to the task of constructing a minimum-degree spanning tree (MDST), we provide algorithms that outperform previously known algorithms designed and optimized specifically for each of these two tasks.
{"title":"Space-Optimal Time-Efficient Silent Self-Stabilizing Constructions of Constrained Spanning Trees","authors":"Lélia Blin, P. Fraigniaud","doi":"10.1109/ICDCS.2015.66","DOIUrl":"https://doi.org/10.1109/ICDCS.2015.66","url":null,"abstract":"Self-stabilizing algorithms are distributed algorithms supporting transient failures. Starting from any configuration, they allow the system to detect whether the actual configuration is legal, and, if not, they allow the system to eventually reach a legal configuration. In the context of network computing, it is known that, for every task, there is a self-stabilizing algorithm solving that task, with optimal space-complexity, but converging in an exponential number of rounds. On the other hand, it is also known that, for every task, there is a self-stabilizing algorithm solving that task in a linear number of rounds, but with large space-complexity. It is however not known whether for every task there exists a self-stabilizing algorithm that is simultaneously space-efficient and time-efficient. In this paper, we make a first attempt for answering the question of whether such an efficient algorithm exists for every task, by focussing on constrained spanning tree construction tasks. We present a general roadmap for the design of silent space-optimal self-stabilizing algorithms solving such tasks, converging in polynomially many rounds under the unfair scheduler. By applying our roadmap to the task of constructing minimum-weight spanning tree (MST), and to the task of constructing minimum-degree spanning tree (MDST), we provide algorithms that outperform previously known algorithms designed and optimized specifically for solving each of these two tasks.","PeriodicalId":129182,"journal":{"name":"2015 IEEE 35th International Conference on Distributed Computing Systems","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124778030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}