Canon in G major: designing DHTs with hierarchical structure
Prasanna Ganesan, K. Gummadi, H. Garcia-Molina
24th International Conference on Distributed Computing Systems (ICDCS 2004), Proceedings. Pub Date: 2004-03-24. DOI: 10.1109/ICDCS.2004.1281591
Distributed hash tables have been proposed as flat, nonhierarchical structures, in contrast to most scalable distributed systems of the past. We show how to construct hierarchical DHTs while retaining the homogeneity of load and functionality offered by flat designs. Our generic construction, Canon, offers the same routing-state vs. routing-hops trade-off provided by standard DHT designs. The advantages of Canon include (but are not limited to) (a) fault isolation, (b) efficient caching and effective bandwidth usage for multicast, (c) adaptation to the underlying physical network, (d) hierarchical storage of content, and (e) hierarchical access control. Canon can be applied to many different proposed DHTs to construct their Canonical versions. We show how four different DHTs (Chord, Symphony, CAN and Kademlia) can be converted into their Canonical versions, which we call Crescendo, Cacophony, Can-Can and Kandy respectively.
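The flat ring lookup that Canon builds on can be pictured with a toy Chord-style successor function; a minimal sketch, assuming a made-up 16-bit identifier space and invented domain/node names (this loosely illustrates resolving a key within a local domain first and then in a merged ring one level up, not the paper's actual construction):

```python
from hashlib import sha1

def node_id(name, bits=16):
    # Toy 16-bit identifier space; real DHTs use far larger spaces (e.g. 160 bits).
    return int(sha1(name.encode()).hexdigest(), 16) % (1 << bits)

def successor(key, ids):
    # Chord-style lookup: the first node id clockwise from the key on the ring.
    ids = sorted(ids)
    return next((i for i in ids if i >= key), ids[0])  # wrap around at the top

# Two hypothetical leaf domains; a hierarchical lookup consults the local
# domain's ring first, then the ring formed by merging both domains.
domain_a = {node_id(f"a{i}") for i in range(4)}
domain_b = {node_id(f"b{i}") for i in range(4)}
key = node_id("some-object")
local_owner = successor(key, domain_a)
global_owner = successor(key, domain_a | domain_b)
```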
Analyzing the secure overlay services architecture under intelligent DDoS attacks
D. Xuan, S. Chellappan, Xun Wang, Shengquan Wang
24th International Conference on Distributed Computing Systems (ICDCS 2004), Proceedings. Pub Date: 2004-03-24. DOI: 10.1109/ICDCS.2004.1281606
Distributed denial of service (DDoS) attacks are currently major threats to communication in the Internet. A secure overlay services (SOS) architecture has been proposed to provide reliable communication between clients and a target under DDoS attacks. The SOS architecture employs a set of overlay nodes, arranged in three hierarchical layers, that control access to the target. Although the architecture is novel and works well under simple congestion-based attacks, we observe that it is vulnerable under more intelligent attacks. We generalize the SOS architecture by introducing more flexibility in layering to the original architecture. We define two intelligent DDoS attack models and develop an analytical approach to study the impacts of the number of layers, the number of neighbors per node, and the node distribution per layer on system performance under these two attack models. Our data clearly demonstrate that performance is indeed sensitive to these design features, and that the features interact with each other to affect overall system performance.
Choosing replica placement heuristics for wide-area systems
M. Karlsson, C. Karamanolis
24th International Conference on Distributed Computing Systems (ICDCS 2004), Proceedings. Pub Date: 2004-03-24. DOI: 10.1109/ICDCS.2004.1281600
Data replication is used extensively in wide-area distributed systems to achieve low data-access latency. A large number of heuristics have been proposed to perform replica placement. Practical experience indicates that the choice of heuristic makes a big difference in the cost of required infrastructure (e.g., storage capacity and network bandwidth), depending on system topology, workload and performance goals. We describe a method to help system designers choose placement heuristics that meet their performance goals at the lowest possible infrastructure cost. Existing heuristics are classified according to a number of properties. The inherent cost (lower bound) for each class of heuristics is obtained for given system, workload and performance goals. The system designer compares different classes of heuristics on the basis of these lower bounds. Experimental results show that choosing a heuristic with the proposed methodology results in up to 7 times lower cost compared to using an "obvious" heuristic, such as caching.
Scalable self-stabilization via composition
William Leal, A. Arora
24th International Conference on Distributed Computing Systems (ICDCS 2004), Proceedings. Pub Date: 2004-03-24. DOI: 10.1109/ICDCS.2004.1281563
Objections to the practical use of stabilization have centered around problems of scale. Because of potential interferences between actions, global reasoning over the entire system is in general necessary, and the complexity of this task increases dramatically as systems grow in size. Alternatives for dealing with this complexity focus on reset and composition. For reset, the problem is that any fault, no matter how minor, causes a complete system reset with potentially significant loss of availability. Existing compositional alternatives, including compositional reset, impose severe restrictions on candidate systems. To address these issues, we give a framework for composition in which global reasoning and detailed system knowledge are not necessary, and which applies to a significantly wider range of systems than has hitherto been possible. We explicitly identify, for each component, which other components it can corrupt. Additionally, the correction of one component often depends on the prior correction of one or more other components, constraining the order in which correction can take place. Given appropriate component stabilizers such as detectors and correctors, we offer several ways to coordinate system correction, depending on what is actually known about the corruption and correction relations. By reducing the design of and reasoning about stabilization to local activities involving each component and the neighbors with which it interacts, the framework is scalable. Reset is generally avoided by using the correction relation to check and correct only where necessary. By including both correction and corruption relations, the framework subsumes and extends other compositional approaches. Though not directly a part of this work, we mention tools and techniques that can be used to help calculate the dependency and corruption relations and to help create the necessary stabilizers. To illustrate the theory, we show how this framework has been applied in our work on sensor networks.
sFlow: towards resource-efficient and agile service federation in service overlay networks
Mea Wang, Baochun Li, Zongpeng Li
24th International Conference on Distributed Computing Systems (ICDCS 2004), Proceedings. Pub Date: 2004-03-24. DOI: 10.1109/ICDCS.2004.1281630
Existing research on the composition of complex federated services has assumed that service requests and deliveries flow through a particular service path or tree. Here, we extend this service model to a directed acyclic graph, allowing services to be delivered via parallel paths and interleaved with each other. This richer service flow model introduces additional complexity into the design of a distributed algorithm to federate existing services, as well as into provisioning the required quality in the most resource-efficient fashion. To this end, we propose sFlow, a fully distributed algorithm executed on all service nodes, such that the federated service flow graph is resource efficient, performs well, and meets the demands of service consumers.
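The shift from a service path to a directed acyclic graph can be made concrete with a toy dependency graph; the service names below are invented for illustration and are not from the paper:

```python
from graphlib import TopologicalSorter

# Hypothetical federated-service DAG: each service maps to the set of services
# whose output it consumes. The parallel branches ("transcode", "watermark")
# run over separate paths and are merged downstream, which a pure path or
# tree model cannot express.
service_dag = {
    "transcode": {"source"},
    "watermark": {"source"},
    "mix": {"transcode", "watermark"},
    "deliver": {"mix"},
}

# Any valid execution order must respect the DAG's edges.
order = list(TopologicalSorter(service_dag).static_order())
```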
Firewall design: consistency, completeness, and compactness
M. Gouda, A. Liu
24th International Conference on Distributed Computing Systems (ICDCS 2004), Proceedings. Pub Date: 2004-03-24. DOI: 10.1109/ICDCS.2004.1281597
A firewall is often placed at the entrance of each private network in the Internet. The function of a firewall is to examine each packet that passes through the entrance and decide whether to accept the packet and allow it to proceed or to discard it. A firewall is usually designed as a sequence of rules. To make a decision concerning a given packet, the firewall rules are compared, one by one, with the packet until one rule is found to be satisfied by the packet: this rule determines the fate of the packet. We present the first method for designing the sequence of rules in a firewall to be consistent, complete, and compact. Consistency means that the rules are ordered correctly, completeness means that every packet satisfies at least one rule in the firewall, and compactness means that the firewall has no redundant rules. Our method starts by designing a firewall decision diagram (FDD, for short) whose consistency and completeness can be checked systematically (by an algorithm). We then apply a sequence of five algorithms to this FDD to generate, reduce and simplify the target firewall rules while maintaining the consistency and completeness of the original FDD.
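The first-match semantics the abstract describes can be sketched in a few lines; the packet fields and the rules themselves are hypothetical examples, and the final catch-all rule shows what completeness means in practice (every packet matches at least one rule):

```python
# Each rule pairs a predicate over packet fields with a decision.
# Rules are examined in order; the first satisfied rule wins.
RULES = [
    (lambda p: p["dst_port"] == 22 and p["src"].startswith("10."), "accept"),
    (lambda p: p["dst_port"] == 22, "discard"),
    (lambda p: True, "accept"),  # catch-all rule guarantees completeness
]

def decide(packet):
    for predicate, decision in RULES:
        if predicate(packet):
            return decision  # the first satisfied rule determines the fate
```

A redundant rule, in this model, is one that can be deleted without changing `decide` on any packet; compactness means no such rule remains.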
ULC: a file block placement and replacement protocol to effectively exploit hierarchical locality in multi-level buffer caches
Song Jiang, Xiaodong Zhang
24th International Conference on Distributed Computing Systems (ICDCS 2004), Proceedings. Pub Date: 2004-03-24. DOI: 10.1109/ICDCS.2004.1281581
In a large client/server cluster system, file blocks are cached in a multilevel storage hierarchy. Existing file block placement and replacement schemes are conducted either on each level of the hierarchy independently, or by applying an LRU policy across multiple levels. One major limitation of these schemes is that the hierarchical locality of file blocks, with its nonuniform strengths, is ignored, resulting in many unnecessary block misses or additional communication overhead. To address this issue, we propose a client-directed, coordinated file block placement and replacement protocol, where the nonuniform strengths of locality are dynamically identified at the client level to direct servers in placing or replacing file blocks accordingly on the different levels of the buffer caches. In other words, the caching layout of the blocks in the hierarchy dynamically matches the locality of block accesses. The effectiveness of our proposed protocol comes from achieving the following three goals: (1) the multilevel cache retains the same hit rate as that of a single-level cache whose size equals the aggregate size of the multilevel caches; (2) the nonuniform locality strengths of blocks are fully exploited and ranked to fit into the physical multilevel caches; (3) the communication overheads between caches are reduced.
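Goal (1), an aggregate-size hit rate across levels, is the property that generic demotion-based exclusive caching also targets; the sketch below illustrates that idea with two LRU levels and is not the ULC protocol itself (ULC is client-directed and locality-ranked):

```python
from collections import OrderedDict

class Level:
    """A single LRU cache level of fixed capacity."""
    def __init__(self, size):
        self.size, self.blocks = size, OrderedDict()

    def touch(self, b):
        self.blocks.pop(b, None)
        self.blocks[b] = True  # most-recently-used position

    def insert(self, b):
        self.touch(b)
        if len(self.blocks) > self.size:
            return self.blocks.popitem(last=False)[0]  # evict the LRU victim
        return None

def access(block, l1, l2):
    # Exclusive two-level caching: an L1 eviction is demoted to L2 rather than
    # discarded, so the hierarchy holds distinct blocks and behaves roughly
    # like one cache of aggregate size.
    if block in l1.blocks:
        l1.touch(block)
        return "l1-hit"
    hit = "l2-hit" if l2.blocks.pop(block, None) else "miss"
    victim = l1.insert(block)      # promote the accessed block into L1
    if victim is not None:
        l2.insert(victim)          # demote the L1 victim into L2
    return hit
```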
Overlay multicast trees of minimal delay
A. Riabov, Zhen Liu, Li Zhang
24th International Conference on Distributed Computing Systems (ICDCS 2004), Proceedings. Pub Date: 2004-03-24. DOI: 10.1109/ICDCS.2004.1281633
Overlay multicast (or application-level multicast) has become an increasingly popular alternative to IP-supported multicast. End nodes participating in overlay multicast can form a directed tree rooted at the source using existing unicast links. For each receiving node there is always only one incoming link. Very often, nodes can support no more than a fixed number of outgoing links due to bandwidth constraints. Here, we describe an algorithm for constructing a multicast tree with the objective of minimizing the maximum communication delay (i.e. the longest path in the tree), while satisfying degree constraints at nodes. We show that the algorithm is a constant-factor approximation algorithm. We further prove that the algorithm is asymptotically optimal if the communicating nodes can be mapped into Euclidean space such that the nodes are uniformly distributed in a convex region. We evaluate the performance of the algorithm using randomly generated configurations of up to 5,000,000 nodes.
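The degree-constrained, delay-minimizing objective can be made concrete with a simple greedy heuristic: repeatedly attach the unattached node whose resulting root-to-node delay is smallest, subject to the out-degree bound. This is only an illustrative sketch, not the paper's constant-factor approximation algorithm:

```python
import heapq

def build_tree(delays, root, max_degree):
    """Greedy degree-bounded tree over a pairwise delay matrix (illustrative)."""
    n = len(delays)
    dist = {root: 0}       # root-to-node delay along the tree
    degree = {root: 0}     # outgoing links used per attached node
    parent = {}
    # Candidate attachments: (resulting delay, attached node u, new node v).
    frontier = [(delays[root][v], root, v) for v in range(n) if v != root]
    heapq.heapify(frontier)
    while frontier:
        d, u, v = heapq.heappop(frontier)
        if v in dist or degree[u] >= max_degree:
            continue  # already attached, or u has no free outgoing links
        dist[v], parent[v], degree[v] = d, u, 0
        degree[u] += 1
        for w in range(n):
            if w not in dist:
                heapq.heappush(frontier, (d + delays[v][w], v, w))
    return parent, dist
```

With `max_degree >= 2` every node is eventually attached, since each attachment offers new outgoing capacity; the maximum value in `dist` is the tree's delay diameter from the root.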
On the confidential auditing of distributed computing systems
Yiping Shen, T. C. Lam, Jyh-Charn S. Liu, Wei Zhao
24th International Conference on Distributed Computing Systems (ICDCS 2004), Proceedings. Pub Date: 2004-03-24. DOI: 10.1109/ICDCS.2004.1281627
We propose a confidential logging and auditing service for distributed information systems. Our design uses a cluster-based TTP (trusted third party) architecture for the event log auditing services, so that no single TTP node has full knowledge of the logs, and thus no single node can misuse the log information without being detected. On the basis of a relaxed form of secure distributed computing paradigms, one can implement a confidential auditing service in which the auditor retrieves certain aggregated system information, e.g. the number of transactions, the total volume, the event traces, etc., without having to access the full log data. Similar to the peer relationship among routers that provides global network routing services, the mutually supported, mutually monitored cluster TTP architecture allows independent systems to collaborate in network-wide auditing without compromising their private information.
An autonomous and decentralized protocol for delay sensitive overlay multicast tree
H. Yamaguchi, Akihito Hiromori, T. Higashino, K. Taniguchi
24th International Conference on Distributed Computing Systems (ICDCS 2004), Proceedings. Pub Date: 2004-03-24. DOI: 10.1109/ICDCS.2004.1281634
Here, we present a protocol for dynamically maintaining a degree-bounded, delay-sensitive spanning tree in a decentralized way on overlay networks. The protocol aims at repairing the spanning tree autonomously even if multiple nodes' leave operations or failures (disappearances) occur simultaneously or continuously within a specified period. It also aims at keeping the diameter (maximum delay) of the tree as small as possible. Simulation results using ns-2 show that the protocol keeps reasonable diameters compared with an existing centralized static algorithm, even when node participations and disappearances occur frequently.