{"title":"Characterization and Solution to a Stateful IDS Evasion","authors":"I. Aib, Tung Tran, R. Boutaba","doi":"10.1109/ICDCS.2009.65","DOIUrl":"https://doi.org/10.1109/ICDCS.2009.65","url":null,"abstract":"We identify a new type of stateful IDS evasion, named signature evasion. We formalize signature evasion for stateful IDSs whose state can be modeled using Deterministic Finite State Automata (DFAs). We develop an efficient algorithm which operates on rule set DFAs and derives a minimal rectification of evasive paths. Finally, we evaluate our solution on Snort signatures, identifying and rectifying existing vulnerable flowbit rule sets.","PeriodicalId":387968,"journal":{"name":"2009 29th IEEE International Conference on Distributed Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121282698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Joint Optimization of Spectrum Handoff Scheduling and Routing in Multi-hop Multi-radio Cognitive Networks","authors":"W. Feng, Jiannong Cao, Chisheng Zhang, Chuda Liu","doi":"10.1109/ICDCS.2009.64","DOIUrl":"https://doi.org/10.1109/ICDCS.2009.64","url":null,"abstract":"Spectrum handoff causes performance degradation of the cognitive network when the primary user reclaims its right to access the licensed spectrum. In a multi-hop cognitive network, this problem becomes even worse since multiple links are involved. Spectrum handoff of multiple links seriously affects the network connectivity and routing. In this paper, we describe a cross-layer optimization approach to solve the spectrum handoff problem with joint consideration of spectrum handoff scheduling and routing. We propose a protocol, called Joint Spectrum Handoff Scheduling and Routing Protocol (JSHRP). This paper makes the following major contributions. First, the concept \"spectrum handoff of single link\" is extended to \"spectrum handoff of multiple links\", termed as \"multi-link spectrum handoff\". Second, we define the problem of coordinating the spectrum handoff of multiple links to minimize the total spectrum handoff latency under the constraint of the network connectivity. This problem is proven to be NP-hard, and we propose both centralized and distributed greedy algorithms to minimize the total latency of spectrum handoff for multiple links in a multi-hop cognitive network. Moreover, we jointly design the rerouting mechanism with spectrum handoff scheduling algorithm to improve the network throughput. Different from previous works in which rerouting is performed after spectrum handoff, our rerouting mechanism is executed before the spectrum handoff really happens. Simulation results show that JSHRP improves the network performance by 50% and the higher degree of interference the cognitive network experiences, the more improvement our solution will bring to the network.","PeriodicalId":387968,"journal":{"name":"2009 29th IEEE International Conference on Distributed Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124630109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MOPS: Providing Content-Based Service in Disruption-Tolerant Networks","authors":"Feng Li, Jie Wu","doi":"10.1109/ICDCS.2009.28","DOIUrl":"https://doi.org/10.1109/ICDCS.2009.28","url":null,"abstract":"Content-based service, which dynamically routes and delivers events from sources to interested users, is extremely important to network services. However, existing content-based protocols for static networks will incur unaffordable maintenance costs if they are applied directly to the highly mobile environment that is featured in disruption-tolerant networks (DTNs). In this paper, we propose a unique publish/subscribe scheme that utilizes the long-term social network properties, which are observed in many DTNs, to facilitate content-based services in DTNs. We distributively construct communities based on the neighboring relationships from nodes' encounter histories. Brokers are deployed to bridge the communities, and they adopt a locally prioritized pub/sub scheme which combines the structural importance with subscription interests, to decide what events they should collect, store, and propagate. Different trade-offs for content-based service can be achieved by tuning the closeness threshold in community formation or by adjusting the broker-to-broker communication scheme. Extensive real-trace and synthetic-trace driven simulation results are presented to support the effectiveness of our scheme.","PeriodicalId":387968,"journal":{"name":"2009 29th IEEE International Conference on Distributed Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122216147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CARP: Handling Silent Data Errors and Site Failures in an Integrated Program and Storage Replication Mechanism","authors":"Lanyue Lu, P. Sarkar, Dinesh Subhraveti, S. Sarkar, Mark Seaman, Reshu Jain, Ahmed Bashir","doi":"10.1109/ICDCS.2009.58","DOIUrl":"https://doi.org/10.1109/ICDCS.2009.58","url":null,"abstract":"This paper presents CARP, an integrated program and storage replication solution. CARP extends program replication systems which do not currently address storage errors, builds upon a record-and-replay scheme that handles nondeterminism in program execution, and uses a scheme based on recorded program state and I/O logs to enable efficient detection of silent data errors and efficient recovery from such errors. CARP is designed to be transparent to applications with minimal run-time impact and is general enough to be implemented on commodity machines. We implemented CARP as a prototype on the Linux operating system and conducted extensive sensitivity analysis of its overhead with different application profiles and system parameters. In particular, we evaluated CARP with standard unmodified email, database, and web server benchmarks and showed that it imposes acceptable overhead while providing sub-second program state recovery times on detecting a silent data error.","PeriodicalId":387968,"journal":{"name":"2009 29th IEEE International Conference on Distributed Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116606924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the Connected k-Coverage Problem in Heterogeneous Sensor Nets: The Curse of Randomness and Heterogeneity","authors":"H. Ammari, John Giudici","doi":"10.1109/ICDCS.2009.67","DOIUrl":"https://doi.org/10.1109/ICDCS.2009.67","url":null,"abstract":"Coverage is an essential task in sensor deployment for the design of wireless sensor networks. While most existing studies on coverage consider homogeneous sensors, the deployment of heterogeneous sensors represents more accurately the network design for real-world applications. In this paper, we focus on the problem of connected k-coverage in heterogeneous wireless sensor networks. Precisely, we distinguish two deployment strategies, where heterogeneous sensors are either randomly or pseudo-randomly distributed in a field. While the first deployment approach considers a single layer of heterogeneous sensors, the second one proposes a multi-tier architecture of heterogeneous sensors to better address the problems introduced by pure randomness and heterogeneity.","PeriodicalId":387968,"journal":{"name":"2009 29th IEEE International Conference on Distributed Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134228632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Minimizing the Hidden Cost of RDMA","authors":"P. Frey, G. Alonso","doi":"10.1109/ICDCS.2009.32","DOIUrl":"https://doi.org/10.1109/ICDCS.2009.32","url":null,"abstract":"Remote Direct Memory Access (RDMA) is a mechanism whereby data is moved directly between the application memory of the local and remote computer. In bypassing the operating system, RDMA significantly reduces the CPU cost of large data transfers and eliminates intermediate copying across buffers, thereby making it very attractive for implementing distributed applications. With the advent of hardware implementations of RDMA over Ethernet (iWARP), its advantages have become even more obvious. In this paper we analyze the applicability of RDMA and identify hidden costs in the setup of its interactions that, if not handled carefully, remove any performance advantage, especially in hardware implementations. From an application point of view, the major difference to TCP/IP based communication is that the buffer management has to be done explicitly by the application. Without the proper optimizations, RDMA loses all its advantages. We discuss the problem in detail, analyze what applications can profit from RDMA, present a number of optimization strategies, and show through extensive performance experiments that these optimizations make a substantial difference in the overall performance of RDMA based applications.","PeriodicalId":387968,"journal":{"name":"2009 29th IEEE International Conference on Distributed Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132848532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Distributed Processing of Spatial Alarms: A Safe Region-Based Approach","authors":"Bhuvan Bamba, Ling Liu, A. Iyengar, Philip S. Yu","doi":"10.1109/ICDCS.2009.25","DOIUrl":"https://doi.org/10.1109/ICDCS.2009.25","url":null,"abstract":"Spatial alarms are considered as one of the basic capabilities in future mobile computing systems for enabling personalization of location-based services. In this paper, we propose a distributed architecture and a suite of safe region techniques for scalable processing of spatial alarms. We show that safe region-based processing enables resource optimal distribution of partial alarm processing tasks from the server to the mobile clients. We propose three different safe region computation algorithms to explore the impact of size and shape of the safe region on network bandwidth, server load and client energy consumption. Concretely, we show that the maximum weighted perimeter rectangular safe region approach outperforms previous techniques in terms of performance and accuracy. We further explore finer granularity safe regions by introducing grid-based and pyramid-based representation of rectilinear polygonal shapes using bitmap encoding. Our experimental evaluation shows that the distributed safe region-based architecture outperforms the two most popular server-centric approaches, periodic and safe period-based, for spatial alarm processing.","PeriodicalId":387968,"journal":{"name":"2009 29th IEEE International Conference on Distributed Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123222918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Optimal Resource Utilization in Heterogeneous P2P Streaming","authors":"Dongyu Liu, Fei Li, Songqing Chen","doi":"10.1109/ICDCS.2009.22","DOIUrl":"https://doi.org/10.1109/ICDCS.2009.22","url":null,"abstract":"Though plenty of research has been conducted to improve Internet P2P streaming quality perceived by end-users, little is known about the upper bounds of performance achievable with available resources, against which different designs could be compared. On the other hand, current practice has shown increasing demand for server capacity in P2P-assisted streaming systems in order to maintain high-quality streaming to end-users. Both research and practice call for a design that can optimally utilize available peer resources. In this paper, we first present a new design, aiming to reveal the best achievable throughput for heterogeneous P2P streaming systems. We measure the performance gaps between various designs and this optimal resource allocation. Through extensive simulations, we find that several typical existing designs have not fully exploited the potential of system resources. However, the control overhead prohibits the adoption of this optimal approach. We then design a hybrid system that trades off the cost of assignment against the utilization of resources. This hybrid approach has a proven theoretical bound on utilization efficiency. Simulation results show that, compared with the optimal resource allocation, our proposed hybrid design achieves near-optimal (up to 90%) utilization while using much less (below 4%) control overhead. Our results provide a basis for both server capacity planning in current P2P-assisted streaming practice and future protocol designs.","PeriodicalId":387968,"journal":{"name":"2009 29th IEEE International Conference on Distributed Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123524834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"m-LIGHT: Indexing Multi-Dimensional Data over DHTs","authors":"Y. Tang, Jianliang Xu, Shuigeng Zhou, Wang-Chien Lee","doi":"10.1109/ICDCS.2009.30","DOIUrl":"https://doi.org/10.1109/ICDCS.2009.30","url":null,"abstract":"In this paper, we study the problem of indexing multidimensional data in P2P networks based on distributed hash tables (DHTs). We identify several design issues and propose a novel over-DHT indexing scheme called m-LIGHT. To preserve data locality, m-LIGHT employs a clever naming mechanism that gracefully maps the index tree into the underlying DHT so that it achieves efficient index maintenance and query processing. Moreover, m-LIGHT leverages a new data-aware index splitting strategy to achieve optimal load balance among peer nodes. We conduct an extensive performance evaluation for m-LIGHT. Compared to the state-of-the-art indexing schemes, m-LIGHT substantially saves the index maintenance overhead, achieves a more balanced load distribution, and improves the range query performance in both bandwidth consumption and response latency.","PeriodicalId":387968,"journal":{"name":"2009 29th IEEE International Conference on Distributed Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130622575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Centaur: A Hybrid Approach for Reliable Policy-Based Routing","authors":"Xin Zhang, A. Perrig, Hui Zhang","doi":"10.1109/ICDCS.2009.77","DOIUrl":"https://doi.org/10.1109/ICDCS.2009.77","url":null,"abstract":"In this paper, we consider the design of a policy-based routing system and the role that link state might play. Looking at the problem from a link-state perspective, we propose Centaur, a hybrid routing protocol combining the benefits of both link state and path vector. Through analytical and experimental studies, we demonstrate Centaur's potential in achieving rich policy expressiveness and high network availability. Our work shows that it is possible to combine link-state and path-vector approaches into a practical and efficient algorithm for policy-based routing.","PeriodicalId":387968,"journal":{"name":"2009 29th IEEE International Conference on Distributed Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114854950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}