Title: Investigating the role of a transmission initiator in private peering arrangements
Authors: Ruzana Davoyan, J. Altmann
Pub Date: 2009-06-01 | DOI: 10.1109/INM.2009.5188822 | Venue: 2009 IFIP/IEEE International Symposium on Integrated Network Management
Abstract: This paper investigates how determining the original initiator of a transmission affects demand as well as provider profits. For that purpose, we present a new model, called the differentiated traffic-based interconnection agreement (DTIA), which differentiates traffic into two types, referred to as native and stranger, in order to determine the transmission initiator. In contrast to existing financial settlements, under which payments are based on the net traffic flow, the proposed model governs cost compensation according to the differentiated traffic flows. In addition, we describe a traffic management mechanism that supports the presented approach. Analytical studies using the Nash bargaining solution explore how the proposed strategy affects the outcome of the providers' negotiation. The key result is that determining the initiator of a transmission leads providers to earn higher profits.
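The negotiation analysis above rests on the Nash bargaining solution. As a hedged illustration of that solution concept only (not of the DTIA model itself), the following Python sketch computes the symmetric bargaining split of a joint interconnection surplus between two providers; the surplus value and disagreement payoffs are hypothetical.

```python
# Minimal sketch of the Nash bargaining solution (NBS) for splitting a joint
# interconnection surplus between two providers. The transferable-utility setup
# and all numbers are illustrative assumptions; the paper's DTIA demand and cost
# functions are not reproduced here.

def nash_bargaining_split(total_surplus: float, d1: float, d2: float) -> tuple[float, float]:
    """Return payoffs (u1, u2) maximizing (u1 - d1) * (u2 - d2) subject to
    u1 + u2 = total_surplus. With this simple feasible set, the symmetric NBS
    splits the gain over the disagreement point equally."""
    gain = total_surplus - d1 - d2
    if gain < 0:
        # No agreement improves on the disagreement point; providers keep d1, d2.
        return d1, d2
    return d1 + gain / 2.0, d2 + gain / 2.0

if __name__ == "__main__":
    # Hypothetical example: joint peering surplus of 10 units; providers earn 3
    # and 2 respectively if negotiation breaks down (e.g., they fall back to transit).
    u1, u2 = nash_bargaining_split(10.0, 3.0, 2.0)
    print(f"Provider 1: {u1:.2f}, Provider 2: {u2:.2f}")  # 5.50 and 4.50
```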
Title: Controlling performance trade-offs in adaptive network monitoring
Authors: A. Prieto, R. Stadler
Pub Date: 2009-06-01 | DOI: 10.1109/INM.2009.5188836 | Venue: 2009 IFIP/IEEE International Symposium on Integrated Network Management
Abstract: A key requirement for autonomic (i.e., self-*) management systems is a short adaptation time to changes in the networking conditions. In this paper, we show that the adaptation time of a distributed monitoring protocol can be controlled. We show this for A-GAP, a protocol for continuous monitoring of global metrics with controllable accuracy. We demonstrate through simulations that, for the case of A-GAP, the choice of the topology of the aggregation tree controls the trade-off between adaptation time and protocol overhead in steady state. Generally, allowing a larger adaptation time permits reducing the protocol overhead. Our results suggest that the adaptation time primarily depends on the height of the aggregation tree and that the protocol overhead is strongly influenced by the number of internal nodes. We outline how A-GAP can be extended to dynamically self-configure and to continuously adapt its configuration to changing conditions, in order to meet a set of performance objectives, including adaptation time, protocol overhead, and estimation accuracy.
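To make the stated dependence on tree height and internal-node count concrete, here is a small back-of-the-envelope Python sketch; the complete k-ary tree model and the reported counts are illustrative assumptions, not A-GAP's actual protocol behavior.

```python
# Rough illustration of the aggregation-tree trade-off: small fan-outs give tall
# trees (longer adaptation paths), large fan-outs give many children per internal
# node but far fewer internal aggregating nodes. The k-ary tree model is an
# assumption chosen for illustration.

import math

def tree_stats(num_leaves: int, fanout: int) -> tuple[int, int]:
    """Height (levels above the leaves) and internal-node count of a complete
    fanout-ary aggregation tree over num_leaves monitored nodes."""
    height = 0
    internal = 0
    nodes = num_leaves
    while nodes > 1:
        nodes = math.ceil(nodes / fanout)
        internal += nodes
        height += 1
    return height, internal

if __name__ == "__main__":
    n = 1000  # hypothetical number of monitored devices
    for k in (2, 4, 8, 16):
        h, i = tree_stats(n, k)
        print(f"fanout={k:2d}  height={h}  internal_nodes={i}")
```

For 1000 leaves, the fan-out 2 tree is ten levels high with about a thousand internal nodes, while the fan-out 16 tree is three levels high with fewer than seventy, which is the kind of trade-off the abstract describes.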
Title: Predictive routing of contexts in an overlay network
Authors: Hahnsang Kim, K. Shin
Pub Date: 2009-06-01 | DOI: 10.1109/INM.2009.5188787 | Venue: 2009 IFIP/IEEE International Symposium on Integrated Network Management
Abstract: While mobile nodes (MNs) undergo handovers across different wireless access networks, their contexts must be propagated for seamless re-establishment of ongoing application sessions, including IP header compression, secure Mobile IP, and authentication, authorization, and accounting services, to name a few. Routing contexts via an overlay network, either on demand or based on prediction of an MN's mobility, introduces a new and challenging context-management requirement. This paper proposes a context router (CXR) that manages contexts in an overlay network. A CXR is responsible for (1) monitoring MNs' cross-handovers, (2) analyzing MNs' movement patterns, and (3) routing contexts ahead of each MN's arrival at an AP or a network. The predictive routing of contexts is based on statistical learning of (dis)similarities between patterns obtained from vector distance measurements. The proposed CXR has been evaluated on a prototype implementation with an MN mobility model in an emulated access network. Our evaluation results show that the prediction mechanisms applied in the CXR outperform a Kalman-filter-based method [34] with respect to both prediction accuracy and computation performance.
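As a rough illustration of similarity-based movement prediction of the kind described above, the sketch below picks the next access point from the closest previously observed movement-feature vector; the feature encoding, distance metric, and AP names are hypothetical and not taken from the paper.

```python
# Illustrative nearest-pattern prediction: compare the MN's current movement
# features against stored (pattern, next AP) observations and predict the AP of
# the most similar pattern. This is a stand-in for the paper's statistical
# learning method, not a reproduction of it.

import math

def euclidean(a: list[float], b: list[float]) -> float:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict_next_ap(history: list[tuple[list[float], str]],
                    current_pattern: list[float]) -> str:
    """history holds (movement-feature vector, AP visited next) pairs seen earlier."""
    best_vec, best_ap = min(history, key=lambda item: euclidean(item[0], current_pattern))
    return best_ap

if __name__ == "__main__":
    # Hypothetical features: (average speed, heading in radians, dwell time at last AP).
    observed = [
        ([1.2, 0.1, 30.0], "AP-7"),
        ([0.4, 3.0, 120.0], "AP-2"),
        ([1.1, 0.2, 25.0], "AP-7"),
    ]
    print(predict_next_ap(observed, [1.0, 0.15, 28.0]))  # prints "AP-7"
```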
Title: Survivable keying for wireless ad hoc networks
Authors: M. N. Lima, G. Pujolle, E. D. Silva, A. Santos, L. Albini
Pub Date: 2009-06-01 | DOI: 10.1109/INM.2009.5188868 | Venue: 2009 IFIP/IEEE International Symposium on Integrated Network Management
Abstract: Cryptographic techniques are at the center of security solutions for wireless ad hoc networks, and public key infrastructures (PKIs) are essential for their efficient operation. However, the fully distributed organization of these networks makes designing PKIs a challenge. Moreover, changes in network paradigms and the increasing dependency on technology require more dependable, survivable, and scalable PKIs. This paper presents a survivable PKI whose goal is to preserve key management operations even in the face of attacks or intrusions. Our PKI is based on adaptive cooperation among preventive, reactive, and tolerant defense lines. It employs different types of evidence to establish users' liability for their keys, as well as social relationships to help public key exchanges. Simulation results show the improvements achieved by our proposal in terms of effectiveness and survivability under different attacks.
Title: Adaptive management of connections to meet availability guarantees in SLAs
Authors: A. Mykkeltveit, B. Helvik
Pub Date: 2009-06-01 | DOI: 10.1109/INM.2009.5188860 | Venue: 2009 IFIP/IEEE International Symposium on Integrated Network Management
Abstract: Today's backbone communication networks serve a wide range of services with different availability requirements. Each customer has a contract, denoted a Service Level Agreement (SLA), which specifies the availability requirement over the contract period. In the literature, different provisioning strategies have been proposed to establish connection arrangements capable of meeting a statistical asymptotic availability for the different customers. In reality, the SLAs specify guarantees on the interval availability, which may deviate significantly from the asymptotic availability. This paper proposes an adaptive strategy to manage which connections are affected by failures and to maximize compliance with the SLAs. Different policies are proposed for managing connections from the same class with equal requirements and connections with different requirements. These policies are evaluated and compared with traditional provisioning policies in a simulation study. The results show that adaptive management can significantly reduce the risk of violating SLAs in several scenarios.
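The gap between asymptotic and interval availability that motivates this work can be illustrated with a small Monte Carlo sketch: an alternating renewal process with exponential up- and downtimes is sampled over a one-year contract period. The failure and repair parameters and the SLA threshold are assumptions chosen for illustration, not the paper's connection model.

```python
# Monte Carlo sketch contrasting asymptotic availability with interval availability
# over a finite SLA period, using exponential uptimes (MTTF) and downtimes (MTTR).

import random

def interval_availability(mttf_h: float, mttr_h: float, period_h: float,
                          rng: random.Random) -> float:
    """Fraction of the contract period the connection is up, for one sample path."""
    t, up_time, is_up = 0.0, 0.0, True
    while t < period_h:
        duration = rng.expovariate(1.0 / (mttf_h if is_up else mttr_h))
        duration = min(duration, period_h - t)
        if is_up:
            up_time += duration
        t += duration
        is_up = not is_up
    return up_time / period_h

if __name__ == "__main__":
    rng = random.Random(1)
    mttf, mttr, period = 2000.0, 8.0, 24 * 365.0  # hours; one-year contract
    asymptotic = mttf / (mttf + mttr)
    samples = [interval_availability(mttf, mttr, period, rng) for _ in range(2000)]
    sla = 0.995
    violations = sum(a < sla for a in samples) / len(samples)
    print(f"asymptotic availability ~ {asymptotic:.4f}")
    print(f"P(interval availability < {sla}) ~ {violations:.3f}")
```

Even when the asymptotic availability exceeds the SLA threshold, a noticeable fraction of one-year sample paths fall below it, which is exactly the deviation the abstract points out.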
Title: Web-based administration of grid credentials for identity and authority delegation
Authors: Songjie Wei, S. Mazumdar
Pub Date: 2009-06-01 | DOI: 10.1109/INM.2009.5188883 | Venue: 2009 IFIP/IEEE International Symposium on Integrated Network Management
Abstract: Grid computing, as a technology for coordinating loosely coupled computing resources for dynamic virtual organizations, has become prevalent in both industry and academia over the past decade. When providing or using heterogeneous and distributed grids, however, users remain concerned about the security of their resources and data. The Globus Toolkit, an open-source grid environment, implements a public key infrastructure (PKI) and extends it for proxy-certificate-based delegation propagation through a series of separate, command-line-based components and services. We have built an integrated web service system that coordinates all of Globus's components and services needed for user credential management. Our system reduces the operations necessary to create and maintain user credentials in Globus. It also simplifies the procedure of deploying or accessing Globus services for user authentication, authorization, and identity and authority delegation. We provide a lightweight Mozilla Firefox add-on on the client side to interact with our online system. On the server side, we implement web services for CA functionality, VOMS attribute certificate generation, and proxy delegation and retrieval, which satisfy the typical needs of most Globus users. Although our current solution is designed to integrate and automate all credential-related operations for Globus users, it is portable to other online service platforms that use similar PKI and delegation mechanisms.
Title: Modeling remote desktop systems in utility environments with application to QoS management
Authors: V. Talwar, K. Nahrstedt, D. Milojicic
Pub Date: 2009-06-01 | DOI: 10.1109/INM.2009.5188881 | Venue: 2009 IFIP/IEEE International Symposium on Integrated Network Management
Abstract: A remote desktop utility system is an emerging client/server networked model for enterprise desktops. In this model, a shared pool of consolidated compute and storage servers hosts users' desktop applications and data, respectively. End users are allocated resources for a desktop session from the shared pool on demand, and they interact with their applications over the network using remote display technologies. Understanding the detailed behavior of applications in these remote desktop utilities is crucial for more effective QoS management, but hard-to-predict workloads, complexity, and scale make this challenging. In this paper, we present a detailed model of a remote desktop system through a case study of an office application, email. The characterization provides insights into the workload and user model, the effect of remote display technology, and the implications of shared infrastructure. We then apply these lessons and modeling results to improve QoS resource management decisions, achieving over 90% improvement compared to state-of-the-art allocation mechanisms. We also discuss how to generalize the methodology for broader applicability of model-driven resource management.
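As a loose illustration of the kind of model-driven allocation decision discussed above (not the paper's method or its 90% figure), the sketch below compares sizing a shared pool by the sum of per-user peaks with sizing it by a high percentile of aggregate demand; the lognormal demand model and all numbers are assumptions.

```python
# Toy comparison of two sizing rules for a consolidated desktop pool:
# sum-of-peaks reservation versus a 99th-percentile of aggregate demand.
# The per-user demand distribution is an illustrative assumption.

import random

def required_capacity(per_user_samples: list[list[float]], percentile: float) -> float:
    """Capacity such that aggregate demand stays below it `percentile` of the time."""
    n = len(per_user_samples[0])
    aggregate = sorted(sum(user[i] for user in per_user_samples) for i in range(n))
    return aggregate[int(percentile * (n - 1))]

if __name__ == "__main__":
    rng = random.Random(7)
    users, samples = 50, 5000
    # Hypothetical per-user CPU demand (cores) sampled from a lognormal distribution.
    demand = [[rng.lognormvariate(-2.0, 0.8) for _ in range(samples)] for _ in range(users)]
    peak_based = sum(max(u) for u in demand)
    model_based = required_capacity(demand, 0.99)
    print(f"sum-of-peaks allocation      : {peak_based:.1f} cores")
    print(f"99th percentile of aggregate : {model_based:.1f} cores")
```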
Title: Defensive configuration with game theory
Authors: Sheila Becker, R. State, T. Engel
Pub Date: 2009-06-01 | DOI: 10.1109/INM.2009.5188848 | Venue: 2009 IFIP/IEEE International Symposium on Integrated Network Management
Abstract: This paper proposes a new model, based on mainstream game theory, for the optimal configuration of services. We consider the case of reliable real-time P2P communications and show how security mechanisms can be configured using game-theoretic concepts, in which the defender role is played by the management plane facing adversaries who play the attacker role. Our main contribution lies in proposing a risk assessment framework and deriving optimal strategies, in terms of Nash equilibria, for both the attacker and the defender. We consider the specific service of communications in autonomic networks and show how the optimal configuration can be determined within the proposed framework.
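To ground the Nash-equilibrium language, the following toy sketch derives the mixed-strategy equilibrium of a 2x2 zero-sum defender/attacker game in closed form; the zero-sum simplification, the payoff values, and the strategy names are illustrative assumptions rather than the paper's risk model.

```python
# Closed-form mixed equilibrium of a 2x2 zero-sum game between a defender (row
# player, maximizer) and an attacker (column player, minimizer). Assumes the game
# has no pure-strategy saddle point, which holds for the example payoffs below.

def mixed_equilibrium_2x2(a11: float, a12: float, a21: float, a22: float):
    """Return (defender prob. of row 1, attacker prob. of column 1, game value)."""
    denom = a11 + a22 - a12 - a21
    p_defend_row1 = (a22 - a21) / denom
    q_attack_col1 = (a22 - a12) / denom
    game_value = (a11 * a22 - a12 * a21) / denom
    return p_defend_row1, q_attack_col1, game_value

if __name__ == "__main__":
    # Hypothetical defender payoffs.
    # Rows: enable heavy protection / keep a lightweight configuration.
    # Columns: attacker launches an attack / stays quiet.
    p, q, v = mixed_equilibrium_2x2(2.0, -1.0, -3.0, 1.0)
    print(f"defender plays heavy protection with prob {p:.2f}")   # 0.57
    print(f"attacker attacks with prob {q:.2f}, game value {v:.2f}")  # 0.29, -0.14
```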
Title: DeskBench: Flexible virtual desktop benchmarking toolkit
Authors: J. Rhee, Andrzej Kochut, K. Beaty
Pub Date: 2009-06-01 | DOI: 10.1109/INM.2009.5188870 | Venue: 2009 IFIP/IEEE International Symposium on Integrated Network Management
Abstract: The thin-client computing model has recently been regaining popularity in a new form known as the virtual desktop, in which the desktop is hosted on a virtualized platform. Even though interest in this computing paradigm is broad, there are relatively few tools and methods for benchmarking virtual client infrastructures. We believe that developing such tools and approaches is crucial for the future success of virtual client deployments and for the objective evaluation of existing and new algorithms, communication protocols, and technologies. We present DeskBench, a virtual desktop benchmarking tool that allows benchmarks to be created quickly and easily by simply recording the user's activity. It also allows the recorded actions to be replayed in a synchronized manner at the maximum possible speed without compromising the correctness of the replay. The proposed approach relies only on the basic primitives of mouse and keyboard events and screen region updates, which are common in window manager systems. We have implemented a prototype of the system and conducted a series of experiments measuring the responsiveness of virtual-machine-based desktops under various load conditions and network latencies. The experiments illustrate the flexibility and accuracy of the proposed method and give some interesting insights into the scalability of virtual-machine-based desktops.
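The synchronized-replay idea (issuing the next recorded input only once the expected screen-region update arrives, rather than sleeping for recorded think times) can be sketched as follows; the event format and display interfaces are hypothetical stand-ins, not DeskBench's implementation.

```python
# Conceptual sketch of feedback-paced replay: each recorded input event is tagged
# with the screen-region update observed after it during recording, and replay
# fires the next event as soon as that update is seen again.

from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class RecordedEvent:
    kind: str             # "mouse" or "key"
    payload: str          # e.g. "click 100,200" or "press a"
    expected_update: str  # identifier of the screen region that repainted afterwards

def replay(events: Iterable[RecordedEvent],
           send_input: Callable[[RecordedEvent], None],
           wait_for_update: Callable[[str], None]) -> None:
    """Replay events back-to-back, pacing on display feedback instead of timers."""
    for ev in events:
        send_input(ev)
        wait_for_update(ev.expected_update)  # blocks until the tagged region repaints

if __name__ == "__main__":
    # Tiny in-memory stand-in for a remote display session.
    log: list[str] = []
    replay(
        [RecordedEvent("mouse", "click 100,200", "region-12"),
         RecordedEvent("key", "press a", "region-3")],
        send_input=lambda ev: log.append(f"sent {ev.kind}: {ev.payload}"),
        wait_for_update=lambda region: log.append(f"saw update on {region}"),
    )
    print("\n".join(log))
```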
Title: A policy based security management architecture for sensor networks
Authors: Sergio de Oliveira, Thiago Rodrigues de Oliveira, J. Nogueira
Pub Date: 2009-06-01 | DOI: 10.1109/INM.2009.5188830 | Venue: 2009 IFIP/IEEE International Symposium on Integrated Network Management
Abstract: Wireless sensor networks are subject to several types of attacks, especially denial-of-service (DoS) attacks. Several mechanisms and techniques have been proposed to secure wireless sensor networks, such as cryptographic processes, key management protocols, intrusion detection systems, node revocation schemes, secure routing, and secure data fusion. A recent work proposes a security management framework that dynamically configures and reconfigures security components in sensor networks according to management information collected by sensor nodes and sent to decision-making management entities. It turns security components on or off only when they are necessary, saving power and extending network lifetime. The architecture is policy-based, which enables rule configurations specific to each application. We evaluate this security management framework, showing its potential to save power and how it can contribute to extending network lifetime. We propose several scenarios to evaluate the performance of the security management framework and to estimate the cost of the security components.
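A minimal sketch of the policy-driven on/off reconfiguration loop described above follows, assuming a hypothetical rule format and component names; the actual framework's policies and management protocol are not reproduced here.

```python
# Illustrative policy loop: the management entity evaluates application-specific
# rules over state reported by sensor nodes and enables only the security
# components the rules call for, keeping everything else off to save energy.

from typing import Callable

# A policy maps observed network state to the set of security components to enable.
Policy = Callable[[dict], set[str]]

def dos_policy(state: dict) -> set[str]:
    """Hypothetical rule set reacting to DoS indicators."""
    enabled: set[str] = set()
    if state.get("dropped_packet_rate", 0.0) > 0.2:
        enabled.add("intrusion_detection")
    if state.get("unauthenticated_joins", 0) > 0:
        enabled.add("node_revocation")
    return enabled

def reconfigure(current: set[str], state: dict, policy: Policy) -> set[str]:
    """Toggle components so only those required by the policy stay active."""
    desired = policy(state)
    for comp in desired - current:
        print(f"enabling {comp}")
    for comp in current - desired:
        print(f"disabling {comp} to save energy")
    return desired

if __name__ == "__main__":
    active: set[str] = set()
    active = reconfigure(active, {"dropped_packet_rate": 0.35}, dos_policy)
    active = reconfigure(active, {"dropped_packet_rate": 0.05}, dos_policy)
```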