In Delay Tolerant Networks (DTNs), disconnections between nodes are frequent, so establishing a routing path from the source node to the destination node may not be possible. However, if a node transmits packets to every node it encounters, its battery will be depleted quickly. Much research has been done on routing and forwarding algorithms for DTNs, but few studies explicitly address the energy issue. In this paper, we propose the n-epidemic routing protocol, an energy-efficient routing protocol for DTNs. The n-epidemic routing protocol is based on the reasoning that, in order to reach a large audience with a small number of transmissions, it is better to transmit only when the number of neighbors reaches a certain threshold. We compare the delivery performance of the n-epidemic routing protocol with that of the basic epidemic routing protocol, using both an analytical approach and an empirical approach based on a real experimental dataset. The experiments show that the n-epidemic routing protocol improves the delivery performance of basic epidemic routing by 434% on average.
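The threshold rule described above can be sketched in a few lines; this is a minimal illustration of the idea, not the paper's implementation, and the helper names are ours:

```python
def should_transmit(num_neighbors: int, n: int) -> bool:
    """n-epidemic rule: forward the bundle only when at least n
    neighbors are in range, so one (battery-costly) transmission
    reaches at least n receivers at once."""
    return num_neighbors >= n

def copies_per_transmission(encounter_sizes, n):
    """Average number of packet copies created per transmission over a
    trace of encounters, where each entry is the neighbor count seen
    at that encounter. Encounters below the threshold cost nothing."""
    sent = [s for s in encounter_sizes if should_transmit(s, n)]
    if not sent:
        return 0.0
    return sum(sent) / len(sent)
```

With a threshold of n = 4, a node that sees 1, 2, 5, and 6 neighbors transmits only twice but averages 5.5 copies per transmission, which is the energy-efficiency argument in a nutshell.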
Xiaofeng Lu, P. Hui, "An Energy-Efficient n-Epidemic Routing Protocol for Delay Tolerant Networks," 2010 IEEE Fifth International Conference on Networking, Architecture, and Storage, July 2010. doi:10.1109/NAS.2010.46
Next-generation networks anticipate an increasing amount of network traffic from a wide range of emerging network applications. Packet-flow features (such as the minimal packet inter-arrival time and the number of packets with non-zero options in their TCP headers) are frequently used to determine the traffic type and apply security policies. However, extracting flow features is difficult due to increasing line rates, a broad range of network protocols, and a variety of complex flow features. In this paper, we leverage multi-core processors to speed up the feature extraction process. We design an open-source parallel software tool aimed at processing network packet flows in real time. We implement the software in four designs: serial, parallel, pipelined, and hybrid architectures. We evaluate the performance of the parallel software tool through measurement experiments. Our experimental results show that each design increases packet processing throughput by 5-7% over the previous one, and the hybrid implementation improves packet processing performance by 19.3% over the serial implementation.
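One of the flow features named above, the minimal packet inter-arrival time, can be computed in a single streaming pass per flow. This is a simplified single-threaded sketch (the paper's tool parallelizes this kind of work across cores); the flow key would normally be the 5-tuple:

```python
from collections import defaultdict

def min_interarrival(packets):
    """Minimal packet inter-arrival time per flow.
    `packets` is an iterable of (flow_id, timestamp) pairs with
    timestamps in seconds, in arrival order."""
    last_seen = {}                               # flow_id -> last timestamp
    min_gap = defaultdict(lambda: float("inf"))  # flow_id -> smallest gap
    for flow, ts in packets:
        if flow in last_seen:
            gap = ts - last_seen[flow]
            if gap < min_gap[flow]:
                min_gap[flow] = gap
        last_seen[flow] = ts
    return dict(min_gap)
```

Because each flow's state is independent, flows can be hash-partitioned across worker threads, which is the natural parallel decomposition for this feature.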
Sanping Li, Yan Luo, "High Performance Flow Feature Extraction with Multi-core Processors," 2010 IEEE Fifth International Conference on Networking, Architecture, and Storage, July 2010. doi:10.1109/NAS.2010.36
In greedy routing, each relay node forwards the message to a neighbor (also called a successor) that is closer to the destination. However, the successor candidate set (SCS) changes whenever the relay node's position relative to the destination changes. The network-wide configuration in which all succeeding paths from a relay node are blocked by local minima is irregular, and the affected region cannot be determined until routing actually begins. In this paper, we introduce a new information model that determines the pattern of the SCS under the impact of local minima by sacrificing a little routing flexibility. As a result, each 1-hop advance can avoid unsafe situations and stay on a non-detour path. In our model, each node prepares the information proactively, but can use it for all paths passing through it, saving the cost and delay of a reactive model. We focus on an "everyone" model, in which each node applies the same generic process in a fully distributed manner, in order to achieve a reliable solution for real applications where communication links are irregular and their quality changes dynamically. In detail, we discuss how, in a sample realistic environment, the pattern of the SCS can be condensed into a single safety descriptor in [0,1] at each node. It indicates the maximum probability of a successful non-detour path from this node to the edge of the network: the larger the value, the more likely non-detour routing will succeed and the more reliable the path will be. We illustrate the effectiveness of this indirect reference information in the corresponding routing, in terms of the cost of information construction and update propagation and the success rate of non-detour path construction, compared with the best results known to date.
Zhen Jiang, Zhigang Li, Nong Xiao, Jie Wu, "CR: Capability Information for Routing of Wireless Ad Hoc Networks in the Real Environment," 2010 IEEE Fifth International Conference on Networking, Architecture, and Storage, July 2010. doi:10.1109/NAS.2010.31
Solid-state disks (SSDs) with high I/O performance are becoming increasingly popular. To extend the lifetime of flash memory, one can apply wear-leveling strategies to manage data blocks. However, wear-leveling strategies inevitably degrade write performance. In addition to lowering write performance, wear-leveling strategies make a block unwritable when one bit of the block is invalid. Although data reconstruction techniques have been widely employed in disk arrays, they have not been studied in the context of solid-state disks. In this paper, we present a new fine-grained data-reconstruction algorithm for solid-state disks. The algorithm aims to provide a simple yet efficient wear-leveling strategy that improves both the I/O performance and the reliability of solid-state disks. Simulation experiments show that all data blocks end up with very similar erasure counts, and the number of extra erasures incurred by our algorithm is marginal.
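For context, the basic wear-leveling decision that such strategies build on is to steer each write to a lightly-erased block so erase counts stay uniform. This generic greedy sketch is illustrative only; it is not the fine-grained algorithm the paper proposes:

```python
def pick_target_block(free_blocks):
    """Greedy wear leveling: among the free (erased) blocks, write to
    the one with the fewest erasures, keeping per-block erase counts
    close together across the device.

    `free_blocks` is a list of dicts with at least an "erase_count"
    key (a simplified stand-in for an FTL's block metadata)."""
    return min(free_blocks, key=lambda b: b["erase_count"])
```

The evaluation metric quoted in the abstract, "all data blocks have very similar erasure counts," is exactly what this kind of policy optimizes for.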
Peng Wang, D. Hu, C. Xie, Jianzong Wang, X. Qin, "A Fine-Grained Data Reconstruction Algorithm for Solid-State Disks," 2010 IEEE Fifth International Conference on Networking, Architecture, and Storage, July 2010. doi:10.1109/NAS.2010.62
Data replication has been widely used as a means of increasing the data availability of large-scale storage systems, where failures are the norm. Aiming to provide cost-effective availability and to improve the performance and load balancing of large-scale storage clusters, this paper presents a dynamic replication management scheme referred to as DRM. A model is developed to express availability as a function of the number of replicas; based on this model, the minimal number of replicas needed to satisfy an availability requirement can be determined. DRM further places these replicas among Object-Based Storage Devices (OSDs) in a balanced way, taking into account the different capacity and blocking probability of each OSD in a heterogeneous environment. DRM can dynamically redistribute workloads across the OSD cluster by adjusting the number and location of replicas as the workload and OSD capacities change. Our experimental results demonstrate that DRM is reliable and achieves significant improvements in average response time and load balancing for large-scale OSD clusters.
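The abstract does not give the paper's availability model, but the standard independent-failure form of such a model illustrates how a minimal replica count falls out of it: if each replica is reachable with probability a, then r replicas give availability 1 - (1 - a)^r, and solving for r yields the smallest count meeting a target. This sketch assumes that simple model:

```python
import math

def min_replicas(node_availability: float, required: float) -> int:
    """Smallest replica count r such that 1 - (1 - a)^r >= required,
    i.e. at least one replica is reachable with the target probability.
    Assumes independent replica failures."""
    assert 0.0 < node_availability < 1.0 and 0.0 < required < 1.0
    return math.ceil(math.log(1.0 - required) / math.log(1.0 - node_availability))
```

For example, with 90%-available nodes, three replicas already exceed "three nines" class availability, while 50%-available nodes need seven.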
Q. Wei, B. Veeravalli, Zhixiang Li, "Dynamic Replication Management for Object-Based Storage System," 2010 IEEE Fifth International Conference on Networking, Architecture, and Storage, July 2010. doi:10.1109/NAS.2010.24
We investigate the impact of Irrecoverable Read Errors (IREs) on the Mean Time To Data Loss (MTTDL) of declustered-parity RAID 6 systems. By extending the analytic model for the reliability of RAID 5 systems by Wu et al., we obtain an MTTDL that accounts for two types of data loss: data loss caused by three independent disk failures, and data loss due to an IRE detected during the rebuild after two disks have failed. We further refine the analysis by considering disk scrubbing, which reduces the probability of IREs by periodically reading the data stored on each disk. The results of our numerical analysis show that IREs have a large effect on the MTTDL; the countermeasure is to increase the disk scrubbing rate. For example, the MTTDL of a system in which each disk is scrubbed every day is at least 27 times higher than that of a system with a scrubbing rate of once a year. In addition, declustered-parity RAID 6 improves on the reliability of standard, non-declustered RAID 6 systems: a declustered-parity RAID 6 system without disk scrubbing improves the MTTDL by a factor of at least 150 over a standard system in which each disk is scrubbed every day.
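To see why IREs dominate during rebuild, the textbook back-of-the-envelope calculation (not the paper's full model) is the probability of hitting at least one unrecoverable bit error while reading an entire disk:

```python
def rebuild_ire_probability(capacity_bytes: float, ber: float = 1e-15) -> float:
    """Probability of at least one irrecoverable read error when a
    full disk is read during rebuild, assuming independent bit errors
    with unrecoverable bit error rate `ber` (1e-15 is a commonly
    quoted enterprise-disk figure; consumer disks are often 1e-14)."""
    bits = capacity_bytes * 8.0
    return 1.0 - (1.0 - ber) ** bits
```

Even for a single 1 TB disk at a 1e-15 BER this is close to 1%, and it must be paid on every rebuild, which is why scrubbing (shrinking the pool of latent errors before a rebuild happens) moves the MTTDL so much.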
Yan Gao, Dirk Meister, A. Brinkmann, "Reliability Analysis of Declustered-Parity RAID 6 with Disk Scrubbing and Considering Irrecoverable Read Errors," 2010 IEEE Fifth International Conference on Networking, Architecture, and Storage, July 2010. doi:10.1109/NAS.2010.11
Junfeng Wu, Honglong Chen, W. Lou, Zhibo Wang, Zhi Wang
Node localization has become an important issue in wireless sensor networks due to their broad applications in environment monitoring, emergency rescue, battlefield surveillance, and so on. The DV-Hop localization mechanism works well with the assistance of beacon nodes that are capable of self-positioning. However, if the network is invaded by a wormhole attack, the attacker can tunnel packets through the wormhole link and severely disrupt the DV-Hop localization process. The distance-vector propagation phase of DV-Hop localization further degrades the positioning result compared with localization schemes not under wormhole attack. In this paper, we analyze the impact of wormhole attacks on the DV-Hop localization scheme. Building on the basic DV-Hop localization process, we propose a label-based secure localization scheme to defend against wormhole attacks. Simulation results demonstrate that our proposed secure localization scheme detects the wormhole attack and resists its adverse impacts with high probability.
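To make the attack surface concrete: in standard DV-Hop, each beacon turns known beacon-to-beacon distances and hop counts into an average per-hop distance, which unknown nodes then multiply by their own hop counts. A wormhole shortens hop counts, corrupting exactly this correction factor. A minimal sketch of the correction step (standard DV-Hop, not the paper's defense):

```python
import math

def average_hop_size(beacons, hops):
    """DV-Hop per-hop distance correction: total physical distance
    between beacon pairs divided by total hop count between them.
    `beacons` maps beacon id -> (x, y); `hops` maps (i, j) -> hop count."""
    total_dist = 0.0
    total_hops = 0.0
    for (i, j), h in hops.items():
        (xi, yi), (xj, yj) = beacons[i], beacons[j]
        total_dist += math.hypot(xi - xj, yi - yj)
        total_hops += h
    return total_dist / total_hops

def estimated_distance(hop_count, hop_size):
    """An unknown node's distance estimate to a beacon."""
    return hop_count * hop_size
```

If a wormhole tunnels packets so that a 3-hop beacon pair appears 1 hop apart, the computed hop size inflates, and every node's distance estimate is skewed, which is the degradation the paper quantifies.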
Junfeng Wu, Honglong Chen, W. Lou, Zhibo Wang, Zhi Wang, "Label-Based DV-Hop Localization Against Wormhole Attacks in Wireless Sensor Networks," 2010 IEEE Fifth International Conference on Networking, Architecture, and Storage, July 2010. doi:10.1109/NAS.2010.41
Traffic classification is important to many network applications, such as network monitoring. The classic way to identify flows, e.g., examining the port numbers in packet headers, has become ineffective. In this context, deep packet inspection, which inspects not only the packet headers but also the packet payloads, plays an increasingly important role in traffic classification. Meanwhile, regular expressions are replacing plain strings for representing patterns because of their expressive power, simplicity, and flexibility. However, regular expression matching incurs high memory usage and processing cost, which results in low throughput. In this paper, we analyze the application-level protocol distribution of network traffic and characterize it. Furthermore, we design a fast and memory-efficient traffic classification system with a two-layer architecture, using regular expressions on a multi-core architecture, in contrast to previous one-layer designs. To reduce the memory usage of the DFA, we perform regular expression matching with a compression algorithm called CSCA, which reduces the memory usage of the DFA by 95%. We also introduce several optimizations to accelerate matching. Experiments with real-world traffic and the full set of L7-filter protocol patterns show that the system achieves Gbps-level throughput on 4-core servers.
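The memory problem the abstract refers to comes from the DFA's transition table: one entry per state per input byte. The scan loop itself is trivial, which is exactly why table compression (such as the CSCA algorithm the paper uses) pays off. A minimal table-driven sketch, with a simplistic fall-back-to-start rule in place of a proper DFA failure function:

```python
def dfa_match(transitions, accepting, data: bytes) -> bool:
    """Scan `data` one byte at a time through a DFA.
    `transitions[state]` maps an input byte to the next state;
    missing entries fall back to the start state 0 (a simplification
    of real DFA construction). Returns True on reaching any
    accepting state."""
    state = 0
    for b in data:
        state = transitions[state].get(b, 0)
        if state in accepting:
            return True
    return False
```

A full L7-filter-style pattern set compiles to DFAs with many thousands of states; with 256 next-state pointers per state, the uncompressed tables dominate memory, motivating the 95% reduction reported above.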
Tingwen Liu, Yong Sun, Li Guo, "Fast and Memory-Efficient Traffic Classification with Deep Packet Inspection in CMP Architecture," 2010 IEEE Fifth International Conference on Networking, Architecture, and Storage, July 2010. doi:10.1109/NAS.2010.43
Apaporn Boonyarattaphan, Yan Bai, S. Chung, R. Poovendran
The transformation of healthcare from human-based to online services exposes e-health to the same security threats as other online applications. The identities of legitimate e-health users need to be verified carefully before access privileges are granted. Since each treatment service for a patient occurs within a specific time interval and location, we propose to use time and location as additional parameters in verifying that legitimate users are involved in services. In particular, we develop and implement a prototype of Spatial-Temporal Access Control for authenticating and authorizing users of e-health services, termed STAC-eHS. STAC-eHS is beneficial for e-health services because it allows system users to define spatial and/or temporal constraints for e-health authentication and authorization decisions, thereby improving e-health system security and the protection of patients' privacy. We also perform experiments to evaluate STAC-eHS. The results show that STAC-eHS increases the accuracy of detecting illegitimate users in an e-health system by about 3-12% compared with traditional RBAC, at a small delay of less than two seconds.
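The core idea, RBAC extended with where-and-when constraints, can be sketched as a single access check. All names here are illustrative, not the STAC-eHS API:

```python
from datetime import time

def stac_allow(role_permits, role, resource, now: time, location: str) -> bool:
    """Spatial-temporal access check in the STAC style: the role must
    hold a permission for the resource, AND the request must fall
    inside the permitted time window and location set.

    `role_permits` maps (role, resource) -> (start, end, allowed_locations)."""
    rule = role_permits.get((role, resource))
    if rule is None:
        return False  # plain RBAC denial: no permission at all
    start, end, places = rule
    return start <= now <= end and location in places
```

A ward nurse can then read charts during the day shift from the ward, while the same credentials presented at 10 p.m. or from the lobby are rejected, which is how the scheme catches illegitimate use that time-and-place-blind RBAC would allow.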
Apaporn Boonyarattaphan, Yan Bai, S. Chung, R. Poovendran, "Spatial-Temporal Access Control for E-health Services," 2010 IEEE Fifth International Conference on Networking, Architecture, and Storage, July 2010. doi:10.1109/NAS.2010.38
Group mobility is common in many realistic mobile and wireless environments, but it is rarely exploited in multipath routing. We propose a Group mobility-based Multipath Routing protocol (GMR) for large and dense mobile ad hoc networks (MANETs). The GMR protocol combines intra-group routing and inter-group routing to handle group mobility. In intra-group routing, routes are discovered using the routing table maintained by the group leader, while in inter-group routing, reactive routing with a zoning method is used to discover multiple node-disjoint paths. The purpose of the zoning method is to ensure that each path is mapped to a separate zone, so that the multiple paths are node-disjoint. Performance analysis and simulation results show that the proposed protocol provides satisfactory routing performance in large and dense networks with group mobility patterns.
Yun Ge, Guojun Wang, Jie Wu, "Node-Disjoint Multipath Routing with Group Mobility in MANETs," 2010 IEEE Fifth International Conference on Networking, Architecture, and Storage, July 2010. doi:10.1109/NAS.2010.26