"XML service level specification and validation". Pedro Alípio, S. R. Lima, P. Carvalho. In: 10th IEEE Symposium on Computers and Communications (ISCC'05), 27 June 2005. DOI: 10.1109/ISCC.2005.157.

This paper addresses the problem of formalizing service level specifications (SLSs) as a first step toward simplifying and automating the configuration and management of multiservice IP networks. A formal representation of SLSs allows their automatic validation and processing, fostering the dynamic negotiation of SLSs and interoperability among service management entities. Taking advantage of XML's extensibility and portability, an XML Schema is presented that describes the sections of an SLS and their contents. In addition, an XML validator tool was built to check whether SLSs are correctly specified. An XML SLS for an IP telephony service is used to illustrate the expressiveness of the proposal.
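The paper's Schema itself is not reproduced here, but the validation idea can be sketched. The snippet below checks a hypothetical SLS document for well-formedness and for the presence of a few required sections; the element names (`scope`, `flowDescription`, `performanceGuarantees`) are illustrative placeholders, not the paper's actual schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal SLS document; the section names are illustrative
# placeholders, not the schema defined in the paper.
SLS_DOC = """\
<sls id="voip-gold">
  <scope ingress="10.0.0.0/24" egress="10.0.1.0/24"/>
  <flowDescription dscp="EF" protocol="UDP"/>
  <performanceGuarantees delay="150ms" jitter="30ms" loss="0.01"/>
</sls>
"""

REQUIRED_SECTIONS = {"scope", "flowDescription", "performanceGuarantees"}

def validate_sls(xml_text):
    """Return a list of validation errors (an empty list means the SLS passed)."""
    errors = []
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError as exc:
        return [f"not well-formed: {exc}"]
    if root.tag != "sls":
        errors.append(f"unexpected root element: {root.tag}")
    present = {child.tag for child in root}
    for missing in sorted(REQUIRED_SECTIONS - present):
        errors.append(f"missing required section: {missing}")
    return errors

print(validate_sls(SLS_DOC))  # []  (the sample SLS is valid)
```

A full validator would check against the XML Schema itself (e.g. with a schema-aware library) rather than a hand-coded section list; this sketch only shows the check-and-report shape of the tool.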
"Comparing and evaluating lightweight solutions for replica dissemination and retrieval in dense MANETs". P. Bellavista, Antonio Corradi, Eugenio Magistretti. In: 10th IEEE Symposium on Computers and Communications (ISCC'05), 27 June 2005. DOI: 10.1109/ISCC.2005.42.

There is an emerging market interest in service provisioning over dense mobile ad-hoc networks (MANETs), i.e., limited spatial regions, such as shopping malls, airports, and university campuses, where a large number of mobile wireless peers can cooperate autonomously without relying on statically deployed network infrastructure. We claim that the high node population of dense MANETs can be exploited to simplify the replication of resources of common interest, increasing availability despite unpredictable node exits from dense regions. To this end, we have developed the REDMAN middleware, which supports lightweight, dense-MANET-specific management, dissemination, and retrieval of replicas of data/service components. In particular, the paper focuses on presenting different solutions for replica retrieval and for the dissemination of replica placement information. We have compared and quantitatively evaluated the presented solutions in terms of their ability to retrieve available replicas and their communication overhead. The original SID solution has been shown to outperform the others in dense MANETs and has been integrated into the REDMAN prototype.
"A framework for supporting distributed access control policies". Wei Zhou, C. Meinel, V. Raja. In: 10th IEEE Symposium on Computers and Communications (ISCC'05), 27 June 2005. DOI: 10.1109/ISCC.2005.10.

In this paper we describe a mechanism for managing authorisation policies in distributed environments, based on public key infrastructure (PKI) and privilege management infrastructure (PMI). In our approach, each domain comprises a root policy and a number of subordinate authorisation policies. The root policy specifies how to use the subordinate policies; the subordinate policies describe the access control rules used for making access control decisions. Subordinate policies can be defined and managed independently and dynamically loaded into the access control system at runtime. All policies are stored in X.509 attribute certificates (ACs), thus guaranteeing their integrity. The AC that holds the root policy is co-located with the access control system; the ACs that hold subordinate policies can be distributed. In the root policy, we use policy schemes, policy sub-schemes, and policy hierarchies to manage the subordinate policies, because they make policy management flexible and easy.
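As a rough illustration of the two-level structure (not the paper's implementation, and with all X.509 AC handling omitted), a root policy can be modeled as a mapping from resources to subordinate policies, each of which holds the actual access rules:

```python
# Illustrative two-level policy evaluation: the root policy selects which
# subordinate policy governs a resource; the subordinate policy holds the
# access rules. All names below are hypothetical examples.

SUBORDINATE_POLICIES = {
    "staff-policy": {("alice", "read"), ("alice", "write"), ("bob", "read")},
    "guest-policy": {("carol", "read")},
}

# The root policy maps each resource to the subordinate policy governing it.
ROOT_POLICY = {
    "/reports": "staff-policy",
    "/lobby":   "guest-policy",
}

def is_allowed(subject, action, resource):
    policy_name = ROOT_POLICY.get(resource)
    if policy_name is None:
        return False  # no governing policy: default deny
    return (subject, action) in SUBORDINATE_POLICIES[policy_name]

print(is_allowed("alice", "write", "/reports"))  # True
print(is_allowed("carol", "write", "/lobby"))    # False
```

Because the subordinate policies live in separate (distributable) certificates, any of them can be replaced at runtime without touching the root policy, which is the flexibility the paper emphasizes.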
"A game theoretic analysis of protocols based on fountain codes". Luis López, Antonio Fernández, V. Cholvi. In: 10th IEEE Symposium on Computers and Communications (ISCC'05), 27 June 2005. DOI: 10.1109/ISCC.2005.11.

In this paper we analyze a novel paradigm of reliable communications that is not based on the traditional timeout-and-retransmit mechanism of TCP. Our approach, which we call FBP (fountain based protocol), consists of using a digital fountain encoding, which guarantees that duplicate packets cannot occur. Using game theory, we analyze the behavior of TCP and FBP in the presence of congestion. We show that hosts using TCP have an incentive to switch to an FBP approach, obtaining higher throughput. Furthermore, we show that a Nash equilibrium arises when all hosts use FBP; at this equilibrium, the performance of the network is similar to that obtained when all hosts comply with TCP.
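The digital-fountain idea behind FBP can be sketched with a toy LT-style code: encoded packets are XORs of random subsets of source blocks, and a peeling decoder recovers the blocks once enough distinct packets arrive, with no retransmission needed. This is a didactic sketch, not the paper's protocol; real fountain codes use carefully tuned degree distributions rather than the uniform one used here.

```python
import random

def fountain_encode(blocks, n_packets, rng):
    """Encode integer blocks as XOR combinations of random subsets
    (toy LT-code style; degrees drawn uniformly for simplicity)."""
    k = len(blocks)
    packets = []
    for _ in range(n_packets):
        degree = rng.randint(1, k)
        idx = frozenset(rng.sample(range(k), degree))
        value = 0
        for i in idx:
            value ^= blocks[i]
        packets.append((idx, value))
    return packets

def fountain_decode(packets, k):
    """Peeling decoder: repeatedly resolve packets with one unknown block.
    Returns the k source blocks, or None if these packets do not suffice."""
    pending = [[set(idx), val] for idx, val in packets]
    decoded = {}
    progress = True
    while progress and len(decoded) < k:
        progress = False
        for pkt in pending:
            idx, val = pkt
            for i in [j for j in idx if j in decoded]:
                idx.discard(i)       # peel off already-decoded blocks
                val ^= decoded[i]
            pkt[1] = val
            if len(idx) == 1:
                (i,) = idx
                if i not in decoded:
                    decoded[i] = val
                    progress = True
    return [decoded[i] for i in range(k)] if len(decoded) == k else None

blocks = [0x5A, 0x17, 0xC3, 0x0F]
packets = fountain_encode(blocks, 40, random.Random(7))
# With a 10x packet overhead, peeling succeeds with overwhelming probability.
print(fountain_decode(packets, len(blocks)))
```

The key property the paper relies on is visible here: any sufficiently large set of packets decodes, so duplicates are useless and retransmission timers are unnecessary.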
"Packet classification using two-dimensional multibit tries". Wencheng Lu, S. Sahni. In: 10th IEEE Symposium on Computers and Communications (ISCC'05), 27 June 2005. DOI: 10.1109/ISCC.2005.118.

We develop fast algorithms to construct space-optimal constrained two-dimensional multibit tries for Internet packet classification. Experimental evidence suggests that, for the same memory budget, space-optimal two-dimensional multibit tries require one quarter to one third of the memory accesses required by two-dimensional one-bit tries for table lookup.
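For context, the one-bit baseline the paper improves on can be sketched as a hierarchical trie: a source-prefix trie whose nodes each hold a destination-prefix trie. The sketch below resolves ties by preferring the longest matching source prefix (then the longest destination prefix within it), which is one simplifying convention; real classifiers use explicit rule priorities.

```python
# Minimal two-dimensional (source x destination) one-bit trie classifier.
# Prefixes are bit strings, e.g. "10" means the prefix 10*.

class Node:
    def __init__(self):
        self.child = {}        # '0'/'1' -> Node
        self.dest_trie = None  # second-dimension trie rooted at this node
        self.rule = None       # rule stored at a destination-trie node

def insert(root, src_prefix, dst_prefix, rule):
    node = root
    for bit in src_prefix:
        node = node.child.setdefault(bit, Node())
    if node.dest_trie is None:
        node.dest_trie = Node()
    d = node.dest_trie
    for bit in dst_prefix:
        d = d.child.setdefault(bit, Node())
    d.rule = rule

def lookup(root, src_bits, dst_bits):
    """Return the rule of the most specific matching (src, dst) prefix pair."""
    best = None
    node = root
    for bit in src_bits + "$":   # '$' sentinel: also probe the final node
        if node.dest_trie is not None:
            d, match = node.dest_trie, node.dest_trie.rule
            for b in dst_bits:   # longest-prefix match in the dest trie
                d = d.child.get(b)
                if d is None:
                    break
                if d.rule is not None:
                    match = d.rule
            if match is not None:
                best = match     # longer source prefixes overwrite shorter
        node = node.child.get(bit)
        if node is None:
            break
    return best

root = Node()
insert(root, "", "", "R0")     # default rule
insert(root, "1", "0", "R1")
insert(root, "10", "01", "R2")
print(lookup(root, "10", "01"))  # R2
```

A multibit trie inspects several bits per step instead of one, trading memory for fewer accesses per lookup; the paper's contribution is choosing those strides space-optimally.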
"A task-based adaptive TTL approach for Web server load balancing". Devarshi Chatterjee, Z. Tari, Albert Y. Zomaya. In: 10th IEEE Symposium on Computers and Communications (ISCC'05), 27 June 2005. DOI: 10.1109/ISCC.2005.19.

Web sites attracting high client traffic cannot rely on either mirrored servers or a single server alone to balance the load generated by client requests. DNS load balancing techniques have shown their advantages in dealing with heavy Web traffic. These techniques use the time-to-live (TTL) value associated with a name-to-address translation. Unfortunately, name-to-address translations are cached in intermediate name servers for a period defined by the TTL, so all requests reach the same Web server for that TTL period. The proposed adaptive-TTL approach, called DLB-TS (dynamic load balancing based on task size), takes the time needed to fetch a document into account when choosing the least loaded server. To alleviate the problems of client-side caching and non-cooperative intermediate name servers, server-side redirection is proposed and implemented. Although the algorithm degraded performance under light load because of server-side load balancing, it reduced client-perceived latency by at least 16% compared with existing size-based algorithms.
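A hypothetical sketch of the task-size-aware idea (the formulas below are illustrative guesses, not the paper's): the authoritative DNS picks the server with the lowest estimated finish time for the requested document size, and shortens the TTL as aggregate load grows so that stale cached mappings expire sooner.

```python
# Illustrative task-size-aware server selection with an adaptive TTL.
# Server state and the TTL heuristic are assumptions for the sketch.

BASE_TTL = 60  # seconds

def pick_server(servers, doc_size):
    """servers: {name: (pending_bytes, capacity_bytes_per_second)}.
    Choose the server that would finish this document soonest."""
    def finish_time(item):
        name, (pending, capacity) = item
        return (pending + doc_size) / capacity
    return min(servers.items(), key=finish_time)[0]

def adaptive_ttl(servers):
    """Hand out a shorter TTL when servers are busier (toy heuristic)."""
    load = sum(p / c for p, c in servers.values()) / len(servers)
    return max(1, int(BASE_TTL / (1.0 + load)))

servers = {"web1": (4_000_000, 1_000_000),   # busy, fast
           "web2": (500_000, 500_000)}       # idle, slow
print(pick_server(servers, 250_000))  # web2
print(adaptive_ttl(servers))          # 17
```

The point of weighting by document size is visible in the example: the slower but idle server wins for a small fetch, which a pure least-connections policy could miss.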
"Proportional bandwidth distribution in IP networks implementing the assured forwarding PHB". M. Cano, F. Cerdán. In: 10th IEEE Symposium on Computers and Communications (ISCC'05), 27 June 2005. DOI: 10.1109/ISCC.2005.128.

Recent demands for new applications are giving rise to an increasing need for quality of service (QoS). Nowadays, most IP-based networks tend to use the DiffServ architecture to provide end-to-end QoS, and traffic conditioners are a key element in the deployment of DiffServ. In this paper, we introduce a new approach to traffic conditioning based on feedback signaling between boundary nodes and traffic conditioners. This approach is intended to provide a proportional distribution of excess bandwidth to end users. We evaluate the performance of our proposal through extensive simulations in terms of final throughput, considering contracted target rates and the distribution of spare bandwidth. Results show a high level of fairness in the allocation of excess bandwidth among TCP sources under different network conditions.
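The proportional-distribution goal can be stated as a small fluid-model computation: each flow receives its contracted target rate plus a share of the spare link bandwidth proportional to that target. This is the idealized allocation the conditioners aim to approximate, not the paper's conditioning mechanism itself.

```python
def allocate(link_capacity, targets):
    """Idealized proportional distribution of excess bandwidth.
    targets: {flow: contracted_target_rate}. Each flow gets its target
    plus a share of the excess proportional to its target rate."""
    total_target = sum(targets.values())
    excess = max(0.0, link_capacity - total_target)
    return {flow: rate + excess * rate / total_target
            for flow, rate in targets.items()}

# 10 Mb/s link, contracted targets of 2 and 3 Mb/s: the 5 Mb/s of excess
# splits 2:3, giving 4 and 6 Mb/s.
print(allocate(10.0, {"A": 2.0, "B": 3.0}))  # {'A': 4.0, 'B': 6.0}
```

The paper's feedback signaling exists precisely because real TCP sources do not converge to this allocation on their own; the fluid model above is the benchmark against which fairness is measured.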
"Enabling scalable inter-AS signaling: a load reduction approach". Rute C. Sofia, R. Guérin, P. Veiga. In: 10th IEEE Symposium on Computers and Communications (ISCC'05), 27 June 2005. DOI: 10.1109/ISCC.2005.64.

To achieve better scalability, inter-domain signaling protocols rely on aggregation to reduce the amount of state information that routers must maintain. Nonetheless, they do not address another key scalability factor: the signaling load associated with establishing and maintaining reservations. This load can be reduced by over-reserving bandwidth. Over-reservation allows reservations to be accommodated without exchanging signaling messages, but may result in additional blocking. In this paper, we carry out a systematic investigation of the impact of over-reservation in different aggregation approaches, evaluating it in terms of the achieved signaling reduction and the resulting blocking.
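The signaling-reduction effect can be illustrated with a toy model: an edge router keeps an aggregate reservation toward a destination AS and grows it in fixed quanta, so most per-flow requests are admitted locally without any inter-AS signaling. Growing by a quantum is one plausible over-reservation policy, not necessarily the one the paper studies.

```python
import math

class Aggregate:
    """Toy per-destination aggregate reservation that over-reserves in
    fixed quanta. A signaling message is needed only when the aggregate
    itself must grow; flows that fit are admitted locally."""
    def __init__(self, quantum):
        self.quantum = quantum
        self.reserved = 0.0        # bandwidth reserved on the inter-AS path
        self.used = 0.0            # bandwidth handed to individual flows
        self.signaling_msgs = 0

    def request(self, bw):
        if self.used + bw > self.reserved:
            needed = self.used + bw - self.reserved
            self.reserved += math.ceil(needed / self.quantum) * self.quantum
            self.signaling_msgs += 1   # only now do we signal end-to-end
        self.used += bw
        return True

agg = Aggregate(quantum=10.0)
for _ in range(20):
    agg.request(1.0)          # 20 flow set-ups...
print(agg.signaling_msgs)     # 2  (...but only 2 signaling messages)
```

The trade-off the paper quantifies is also visible here: the 10 units reserved after the first request sit idle until flows arrive, and on a constrained link that idle over-reservation is what causes the additional blocking.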
"Optimizing the reliable distribution of large files within CDNs". L. Cherkasova. In: 10th IEEE Symposium on Computers and Communications (ISCC'05), 27 June 2005. DOI: 10.1109/ISCC.2005.116.

Content delivery networks (CDNs) provide efficient support for serving HTTP and streaming media content while minimizing the network impact of content delivery and overcoming the server overload problem. For large documents and media files, there is the additional problem of distributing the original content across the CDN edge servers. We propose an algorithm, called ALM-FastReplica, for optimizing the replication of large files across the edge servers in CDNs. The original file is partitioned into k subfiles, and each subfile is replicated via a correspondingly constructed multicast tree. Nodes from the different multicast trees use additional cross-node connections to exchange their corresponding subfiles, so that each node eventually receives the entire file. This replication method significantly reduces file replication time, by a factor of 5 to 15 compared with the traditional unicast (point-to-point) scheme. Since a single node failure in a multicast tree during file distribution may affect the delivery of the file to a significant number of nodes, it is important to design an algorithm that can deal with node failures. We augment ALM-FastReplica with an efficient reliability mechanism that handles node failures by making local repair decisions within a particular replication group of nodes. Under the proposed algorithm, the load of a failed node is shared among the nodes of its replication group, making performance degradation gradual.
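The core distribution-plus-exchange step within one replication group can be sketched in a few lines (multicast tree construction and the failure-repair mechanism are omitted):

```python
# Sketch of the FastReplica-style exchange within one group of k nodes:
# the origin pushes a distinct subfile to each node, then the nodes swap
# subfiles among themselves so every node assembles the whole file.

def fast_replica_group(file_bytes, k):
    # Partition the file into k roughly equal subfiles.
    step = -(-len(file_bytes) // k)   # ceiling division
    subfiles = [file_bytes[i * step:(i + 1) * step] for i in range(k)]

    # Distribution step: node i receives only subfile i from the origin.
    nodes = [{i: subfiles[i]} for i in range(k)]

    # Exchange step: each node forwards its subfile to the k-1 other nodes.
    for i in range(k):
        for j in range(k):
            if i != j:
                nodes[j][i] = nodes[i][i]

    # Every node reassembles the original file from the k subfiles.
    return [b"".join(n[i] for i in range(k)) for n in nodes]

copies = fast_replica_group(b"X" * 1000, 4)
print(all(c == b"X" * 1000 for c in copies))  # True
```

The speedup comes from parallelism: the origin uploads each byte once (split across k transfers) instead of k times, and the k-1 exchange transfers proceed concurrently on independent cross-node connections.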
"On the impact of distributed power control over multicast routing protocols". C. Taddia, A. Giovanardi, G. Mazzini. In: 10th IEEE Symposium on Computers and Communications (ISCC'05), 27 June 2005. DOI: 10.1109/ISCC.2005.113.

In this paper we investigate the impact of a distributed power control (DPC) technique on two multicast routing protocols, AMRIS and ODMRP. These routing schemes are tested with the 802.11 and 802.11b MAC protocols. The aim of this work is to investigate the influence of the different MAC solutions, to quantify the reduction in power consumption enabled by the DPC mechanism, and to verify that the performance degradation in the other figures of merit, such as packet delivery fraction and average delay, is not excessive. Comparisons between the various solutions are performed through an extensive simulation study.