A b-coloring of a graph G is a proper coloring of the nodes of G such that each color class contains a node that has a neighbor in all other color classes. A fully dynamic algorithm is one that supports modifications (insertions or deletions) of nodes and edges in a network. In this paper we propose a fully dynamic distributed algorithm to maintain a b-coloring of a graph as its topology evolves. The method determines a b-coloring in time O(Δ²) and needs O(nΔ) color changes to maintain a b-coloring, where n is the number of nodes and Δ is the maximum degree of the graph.
{"title":"A Fully Dynamic Distributed Algorithm for a B-Coloring of Graphs","authors":"Shuang Liu, Brice Effantin, H. Kheddouci","doi":"10.1109/ISPA.2008.47","DOIUrl":"https://doi.org/10.1109/ISPA.2008.47","url":null,"abstract":"A b-coloring of a graph G is a proper coloring of the nodes of G such that each color class contains a node that has a neighbor in all other color classes. A fully dynamic algorithm is an algorithm used to support modifications (insertion or deletion) of nodes and edges in a network. Thus, in this paper we propose a fully dynamic distributed algorithm to maintain a b-coloring of a graph when its topology evolves. This method determines a b-coloring in time O(Δ²) and needs O(nΔ) changes of colors to maintain a b-coloring of a graph, where n is the number of nodes and Δ is the maximum degree in the graph.","PeriodicalId":345341,"journal":{"name":"2008 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114055268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
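The b-coloring condition in the abstract above can be checked mechanically. A minimal sketch (the `adj`/`color` dictionary representation is our own choice, not the paper's):

```python
def is_b_coloring(adj, color):
    """Check that `color` (node -> color) is a proper coloring of the
    graph `adj` (node -> set of neighbors) and that every color class
    contains a b-vertex: a node with neighbors in all other classes."""
    classes = set(color.values())
    # Proper coloring: no edge joins two nodes of the same color.
    for u, nbrs in adj.items():
        for v in nbrs:
            if color[u] == color[v]:
                return False
    # Each class needs a node whose neighborhood covers all other colors.
    for c in classes:
        has_b_vertex = any(
            classes - {c} <= {color[v] for v in adj[u]}
            for u in adj if color[u] == c
        )
        if not has_b_vertex:
            return False
    return True
```

A triangle colored with three colors passes, while a 3-colored star fails: the leaf classes have no node adjacent to both other colors.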
Handoff latency is a severe bottleneck impacting service continuity for voice and multimedia applications in WLANs. The IEEE 802.11k neighbor report defines the neighbor APs that are potential transition candidates for a roaming station, but the method for selecting the roaming target AP is left undefined. Several schemes have been proposed for fast handoff using neighbor APs' information; however, these schemes incur a huge redundant transition-message overhead in the WLAN and require high computing power from the AAA (Authentication, Authorization, and Access control) server. In this paper, we propose an adaptive neighbor caching (ANC) method to achieve higher handoff-prediction accuracy when selecting candidate APs from the neighbor report. An adaptive predictability index is introduced for selecting potential roaming APs, which mitigates the scanning latency, the pre-authentication key-distribution message overhead in the WLAN, and the computing load on the AAA server. Simulation results show that up to 83.5% of transition messages are eliminated compared with Proactive Neighbor Caching (PNC), and that, compared with Selective Neighbor Caching (SNC), candidate-AP selection accuracy improves by 56.4% and transition messages are reduced by 37.5%.
{"title":"Adaptive Neighbor Caching for Fast BSS Transition Using IEEE 802.11k Neighbor Report","authors":"Ching-Hwa Yu, Michael Pan, Sheng-de Wang","doi":"10.1109/ISPA.2008.78","DOIUrl":"https://doi.org/10.1109/ISPA.2008.78","url":null,"abstract":"Handoff latency is a severe bottleneck impacting the service continuity for voice and multimedia applications in WLAN. IEEE 802.11k neighbor report defines the neighbor APs which are potential transition candidates for the roaming target. But the selection method for the roaming target AP is left undefined. Several schemes have been proposed for fast handoff with neighbor APs' information. However, these schemes result in huge redundant transition message overheads in the WLAN and require high computing power for the AAA (Authentication, Authorization, and Access control) server. In this paper, we propose an adaptive neighbor caching (ANC) method to achieve higher handoff prediction accuracy for selecting proper candidate APs in the Neighbor Report. An adaptive predictability index is introduced for selecting those potential roaming APs, which can mitigate the scanning latency and the pre-authentication key distribution message overhead in the WLAN as well as computing loading for the AAA server. Simulation results present up to 83.5% of transition messages are reduced in comparison to the Proactive Neighbor Caching (PNC), 56.4% of candidate AP selection accuracy is improved and 37.5% of transition messages are reduced in comparison to the Selective Neighbor Caching (SNC).","PeriodicalId":345341,"journal":{"name":"2008 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122904375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
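The abstract does not define the adaptive predictability index, so the following only illustrates the selection step under a simple assumption of our own: rank neighbor APs by their observed share of past transitions from the current AP, and cache only those above a threshold (`select_candidates` and its 0.2 default are hypothetical, not the paper's formula):

```python
def select_candidates(transition_counts, threshold=0.2):
    """Hypothetical predictability index: each neighbor AP's index is
    its share of the observed transitions away from the current AP.
    Only APs at or above `threshold` are kept as roaming candidates,
    sorted from most to least likely."""
    total = sum(transition_counts.values())
    if total == 0:
        return []
    index = {ap: n / total for ap, n in transition_counts.items()}
    return sorted((ap for ap, p in index.items() if p >= threshold),
                  key=lambda ap: -index[ap])
```

Caching fewer, better-chosen APs is what reduces the pre-authentication key-distribution messages the abstract measures.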
This paper proposes a fast method for computing the costs of all-pairs shortest paths (APSP) on the graphics processing unit (GPU). The proposed method is implemented using the compute unified device architecture (CUDA), which offers a development environment for general-purpose computation on the GPU. Our method is based on Harish's iterative algorithm, which computes the cost of the single-source shortest path (SSSP) for every source vertex. We show that exploiting task parallelism in the APSP problem allows us to use on-chip memory in the GPU efficiently, reducing the amount of data transferred from the relatively slower off-chip memory. Furthermore, our task-parallel scheme exploits higher parallelism, increasing efficiency with highly threaded code. As a result, our method is 3.4 to 15 times faster than the prior method. Using on-chip memory, our method eliminates approximately 20% of data loads from off-chip memory.
{"title":"A Task Parallel Algorithm for Computing the Costs of All-Pairs Shortest Paths on the CUDA-Compatible GPU","authors":"T. Okuyama, Fumihiko Ino, K. Hagihara","doi":"10.1109/ISPA.2008.40","DOIUrl":"https://doi.org/10.1109/ISPA.2008.40","url":null,"abstract":"This paper proposes a fast method for computing the costs of all-pairs shortest paths (APSPs) on the graphics processing unit (GPU). The proposed method is implemented using compute unified device architecture (CUDA), which offers us a development environment for performing general-purpose computation on the GPU. Our method is based on Harish's iterative algorithm that computes the cost of the single-source shortest path (SSSP) for every source vertex. We present that exploiting task parallelism in the APSP problem allows us to efficiently use on-chip memory in the GPU, reducing the amount of data being transferred from relatively slower off-chip memory. Furthermore, our task parallel scheme is useful to exploit a higher parallelism, increasing the efficiency with highly threaded code. As a result, our method is 3.4--15 times faster than the prior method. Using on-chip memory, our method eliminates approximately 20% of data loads from off-chip memory.","PeriodicalId":345341,"journal":{"name":"2008 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131278009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
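The per-source iterative SSSP idea attributed to Harish's algorithm can be sketched on the CPU as a repeated relaxation loop; this is only a reference for the costs being computed, not the paper's CUDA kernel:

```python
INF = float('inf')

def apsp_costs(n, edges):
    """CPU reference for the APSP costs: run an iterative
    (Bellman-Ford-style) relaxation from every source vertex.
    `edges` is a list of directed (u, v, w) triples with w >= 0."""
    dist = [[INF] * n for _ in range(n)]
    for s in range(n):
        d = dist[s]
        d[s] = 0
        changed = True
        while changed:              # iterate until no distance improves
            changed = False
            for u, v, w in edges:
                if d[u] + w < d[v]:
                    d[v] = d[u] + w
                    changed = True
    return dist
```

The outer loop over sources is the task parallelism the paper maps to the GPU: each source's relaxation is independent, so per-task working data can live in fast on-chip memory.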
Parallel transactional systems execute transaction tasks on multiple servers with load balancing. We consider time constraints on those tasks; because the tasks run in milliseconds, fast admission control is required, and we propose such an admission-control approach. Although there are scheduler designs that build and maintain complete deadline-feasible schedules for jobs, with typical costs of O(n²) in the number of jobs, we are looking for simple and fast admission-test add-on solutions to LWR that can be used with tasks running in milliseconds (e.g. transactions) as well as with lengthier jobs. We give the strictest possible bounds that guarantee deadlines, a deadline-feasibility algorithm with complexity O(n/number of servers), a system model, and the approach itself. We show, by simulation comparison with typical alternatives, that the proposed approach is a good solution.
{"title":"Analyzing QoS Approach for Parallel Soft Real-Time","authors":"P. Furtado","doi":"10.1109/ISPA.2008.85","DOIUrl":"https://doi.org/10.1109/ISPA.2008.85","url":null,"abstract":"Parallel transactional systems are systems that execute transaction tasks in multiple servers, with load-balancing. We consider time constraints on those tasks and the fact that the tasks run in milliseconds means that fast admission control should be used. We propose such an admission control approach. Although there are scheduler designs building and maintaining complete deadline feasible schedules for jobs, with typical costs of O(n2) on the number of jobs, we are looking for simple and fast admission test add-on solutions to LWR that can be used with tasks running in milliseconds (e.g. transactions) as well as with lengthier jobs. We give the strictest possible bounds that guarantee deadlines, the deadline-feasibility algorithm with complexity O(n/number of servers), a system model and the approach itself. We show, by means of simulation comparison to typical alternatives, that the proposed approach is a good solution.","PeriodicalId":345341,"journal":{"name":"2008 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133505017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
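As a rough illustration of a fast add-on admission test (not the paper's actual bounds or algorithm), one can place an arriving task on the least-loaded server and admit it only if its completion time meets the deadline:

```python
def try_admit(server_loads, runtime, deadline, now=0.0):
    """Hypothetical O(number of servers) admission test: put the new
    task on the least-loaded server; admit only if it would finish by
    its deadline. Returns (admitted, updated_loads)."""
    load = min(server_loads)
    finish = now + load + runtime
    if finish > deadline:
        return False, server_loads      # reject; loads unchanged
    i = server_loads.index(load)
    updated = list(server_loads)
    updated[i] = load + runtime
    return True, updated
```

A real test must also guarantee that already-admitted tasks keep their deadlines, which is where the paper's stricter bounds come in.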
The emergence of pervasive computing in our everyday life means that the data necessary for the operation of most of our essential services, across various fields of life, will be managed by these systems. Their dependability has therefore become a major concern, yet dependability issues have not been well explored in pervasive computing research. Pervasive environments are highly complex, heterogeneous, and geographically dispersed; as a result, current means and facets of dependability do not address the needs of these systems. One way to achieve this goal is to adopt a dependability approach based on survivability in pervasive environments. Survivability, however, suffers from a remarkable lack of suitable and mature methods for putting it into practice. In this paper, we focus on achieving survivability in pervasive environments. First, we introduce a formal survivability model based on a rigorous definition of the concept of acceptable service, together with a method for calculating the degree of survivability of the system. Then, we present the basis for a new approach to adapting the system in adverse operational environments so that it complies with its survivability specification. To illustrate these ideas, a case study in pervasive healthcare is presented.
{"title":"A Formal Specification Model of Survivability for Pervasive Systems","authors":"A. Ayara, F. Najjar","doi":"10.1109/ISPA.2008.62","DOIUrl":"https://doi.org/10.1109/ISPA.2008.62","url":null,"abstract":"The emergence of pervasive computing in our everyday life supposes that the data necessary to the operation of the majority of our essential services in various fields of life will be managed by these systems. Thus, their dependability became a major concern. But, dependability issues have not been well explored so far in pervasive computing research. Pervasive environments are highly complex, heterogeneous and geographically dispersed. As a result, current means and facets of dependability do not address the needs of these systems. A solution to achieve this goal should be to adopt a dependability approach based on survivability in pervasive environments. But, the survivability suffers from a remarkable lack of suitable and mature methods for using it in practice. In this paper, we focus on achieving survivability in pervasive environments. First, we introduce a formal survivability model based on a rigorous definition of the concept of acceptable service and a method for calculating the degree of survivability of the system. Then, we present the basis for a new approach to adapt the system in adverse operation environment to comply with its survivability specification. To fix ideas, a case study in pervasive healthcare is presented.","PeriodicalId":345341,"journal":{"name":"2008 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134346343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
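The paper's formal survivability measure is not reproduced in the abstract; as a loose illustration only, a degree of survivability could be the weighted fraction of essential services still delivering acceptable service:

```python
def survivability_degree(services):
    """Illustrative stand-in for the paper's formal measure: each
    service is a (weight, acceptable) pair, and the degree is the
    weighted share of services whose delivered service is acceptable."""
    total = sum(w for w, _ in services)
    ok = sum(w for w, acceptable in services if acceptable)
    return ok / total if total else 0.0
```

An adaptation approach would then act whenever this degree drops below the level the survivability specification demands.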
Emerging body-wearable devices for continuous health monitoring are severely energy constrained, yet they are required to offer high communication reliability under fluctuating channel conditions. Such devices require very careful management of their energy resources in order to prolong their lifetime. In earlier work we proposed dynamic power control as a means of saving precious energy in off-the-shelf sensor devices. In this work we experiment with a real body-wearable device to assess the power savings possible in a realistic setting. We quantify the power consumption against the packet loss and establish the feasibility of dynamic power control for saving energy in a truly body-wearable setting.
{"title":"Experiments in Adaptive Power Control for Truly Wearable Biomedical Sensor Devices","authors":"Ashay Dhamdhere, V. Sivaraman, A. Burdett","doi":"10.1109/ISPA.2008.96","DOIUrl":"https://doi.org/10.1109/ISPA.2008.96","url":null,"abstract":"Emerging body-wearable devices for continuous health monitoring are severely energy constrained and yet required to offer high communication reliability under fluctuating channel conditions. Such devices require very careful management of their energy resources in order to prolong their lifetime. In our earlier work we had proposed dynamic power control as a means of saving precious energy in off the-shelf sensor devices. In this work we experiment with a real body-wearable device to assess the power savings possible in a realistic setting. We quantify the power consumption against the packet loss and establish the feasibility of dynamic power control for saving energy in a truly-body-wearable setting.","PeriodicalId":345341,"journal":{"name":"2008 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132231008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
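A dynamic power-control rule of the kind evaluated here can be sketched as a simple threshold loop; the power levels and loss thresholds below are illustrative choices of ours, not the paper's measured parameters:

```python
def adapt_tx_power(power, loss_rate, p_min=0, p_max=7,
                   loss_hi=0.1, loss_lo=0.02):
    """One step of a sketched power controller: raise the transmit
    power level when recent packet loss is high, lower it when the
    link is comfortably reliable, otherwise hold steady."""
    if loss_rate > loss_hi and power < p_max:
        return power + 1
    if loss_rate < loss_lo and power > p_min:
        return power - 1
    return power
```

The hysteresis band between `loss_lo` and `loss_hi` keeps the radio from oscillating between levels on a fluctuating channel.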
Power management for WSNs can take many forms, from adaptively tuning the power consumption of some of a node's components to hibernating it completely. In the latter case, the competence of the WSN must not be compromised. In general, the competence of a WSN is its ability to perform its function in an accurate and timely fashion. These two related quality-of-service (QoS) metrics are primarily affected by the density and the latency of data from the environment, respectively. Without adequate density, interesting events may be inadequately observed or missed completely by the application, while stale data could result in event detection occurring too late. Opposing this is the fact that the energy consumed by the network is related to the number of active nodes in the deployment. Therefore, given that the nodes have finite power resources, a trade-off exists between the longevity and the QoS provided by the network, and it is crucial that both aspects are considered when evaluating a power-management protocol. In this paper, we present an evaluation of a novel node-hibernation technique based on interpolated sensor readings, according to four metrics: energy consumption, density, message latency, and the accuracy of an application utilising the data from the WSN. A comparison with a standard WSN that does not engage in power management is also presented, in order to show the overhead of the protocol's operation.
{"title":"Evaluating Interpolation-Based Power Management","authors":"R. Tynan, G. O’hare","doi":"10.1109/ISPA.2008.71","DOIUrl":"https://doi.org/10.1109/ISPA.2008.71","url":null,"abstract":"Power management for WSNs can take many forms, from adaptively tuning the power consumption of some of the components of a node to hibernating it completely. In the latter case, the competence of the WSN must not be compromised. In general, the competence of a WSN is its ability to perform its function in an accurate and timely fashion. These two, related, Quality of Service (QoS) metrics are primarily affected by the density and latency of data from the environment, respectively. Without adequate density, interesting events may not be adequately observed or missed completely by the application, while stale data could result in event detection occurring too late. In opposition to this is the fact that the energy consumed by the network is related to the number of active nodes in the deployment. Therefore, given that the nodes have finite power resources, a trade-off exists between the longevity and QoS provided by the network and it is crucial that both aspects are considered when evaluating a power management protocol. In this paper, we present an evaluation of a novel node hibernation technique based on interpolated sensor readings according to these four metrics: energy consumption, density, message latency and the accuracy of an application utilising the data from the WSN. A comparison with a standard WSN that does not engage in power management is also presented, in order to show the overhead in the protocol's operation.","PeriodicalId":345341,"journal":{"name":"2008 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133497733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
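The abstract does not name the interpolator used to stand in for hibernating nodes; inverse-distance weighting is one plausible choice, sketched here under that assumption:

```python
def idw_estimate(active, x, y, p=2):
    """Estimate the reading at a hibernating node's position (x, y)
    by inverse-distance weighting over the active nodes, given as
    (xi, yi, value) triples. IDW is our assumed interpolator, not
    necessarily the paper's."""
    num = den = 0.0
    for xi, yi, v in active:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0:
            return v                    # exactly at an active node
        w = 1.0 / d2 ** (p / 2)         # weight falls off as 1/d^p
        num += w * v
        den += w
    return num / den
```

The evaluation's density/accuracy trade-off then amounts to how far such estimates can drift from the hibernated nodes' true readings.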
Yingwen Song, H. Takemiya, Yoshio Tanaka, H. Nakada, S. Sekiguchi
To ensure that large-scale and long-time (LSLT) applications run smoothly in a dynamic and heterogeneous grid environment, we have designed and implemented a WSRF-based framework with which users can reserve resources and request computing resources on demand. The framework can be architecturally divided into three tiers: a tier providing client-side reservation and allocation APIs, a tier for reservation brokerage and resource allocation, and a tier for backend services. The reservation API supports making and releasing a reservation, as well as showing available reservations. The allocation API supports requesting, checking, and releasing a resource in a convenient way. The middle tier is designed to hide the complexity of the underlying grid infrastructure and is implemented to provide several allocation algorithms. At present, one of the main backend services is a Maui-based reservation service. A portal to facilitate resource management is also available. In this paper, we present the API specification, the architecture, and the implementation of this framework. We also show a detailed experimental example.
{"title":"GRPLib: A Web Service Based Framework Supporting Sustainable Execution of Large-Scale and Long-Time Grid Applications","authors":"Yingwen Song, H. Takemiya, Yoshio Tanaka, H. Nakada, S. Sekiguchi","doi":"10.1109/ISPA.2008.13","DOIUrl":"https://doi.org/10.1109/ISPA.2008.13","url":null,"abstract":"To ensure large-scale and long-time (LSLT) applications to run smoothly in a dynamic and heterogeneous grid environment, we have designed and implemented a WSRF-based framework with which users can reserve resources and request on-demand computing resources. The framework can be architecturally divided into three tiers: the tier providing client-side reservation and allocation APIs, the tier for reservation brokerage and resource allocation, and the tier for backend services. The reservation API is implemented for making and releasing a reservation, as well as for showing available reservations. The allocation API is implemented to request, to check, and to release a resource in a convenient way. The middle tier is designed to hide the complexity of the underlying grid infrastructure, and implemented to provide several allocation algorithms. One of the main backend services is Maui-based reservation service at present. A portal to facilitate the resource management is also available. In this paper, we present the API specification, the architecture, and the implementation of this framework. We also show a detailed experimental example.","PeriodicalId":345341,"journal":{"name":"2008 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129889287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
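GRPLib's actual API specification is given in the paper; the toy in-memory broker below only mimics the shape of the reservation API (make, show, release) with method and field names of our own choosing:

```python
import itertools

class ToyReservationBroker:
    """In-memory stand-in for the middle tier; GRPLib's real broker
    hides the grid infrastructure and talks to Maui-based backend
    services. All names here are illustrative."""
    def __init__(self):
        self._ids = itertools.count(1)
        self.reservations = {}

    def make_reservation(self, resources, start, duration):
        rid = next(self._ids)
        self.reservations[rid] = {"resources": resources,
                                  "start": start, "duration": duration}
        return rid

    def show_reservations(self):
        return dict(self.reservations)

    def release_reservation(self, rid):
        self.reservations.pop(rid, None)
```

The allocation API would sit alongside this, handing out resources against an existing reservation and checking or releasing them.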
Despite recent theory development, methods of calibration that accurately recover signals from biased sensor readings remain limited in their applicability. Acoustic sensors, for instance, which have been popular in low power wireless sensor networks, are difficult to calibrate in this manner, given their significant hardware variability, large dynamic range, sensitivity to battery power level, and complex spatial/temporal environmental variations. In this paper, we submit that the applicability of calibration is broadened by lifting the calibration problem from the level of sensors to that of sensing applications. We show feasibility of adaptive, easy, and accurate calibration at the level of application-specific features, via an example of recovering the feature of acoustic signal-to-noise ratio (SNR) that is useful in event-detection applications. By easy, we mean there is an efficient, purely local, and stimulus-free procedure for recovering SNR (that compares measured variances for multiple randomly chosen sensitivities, effected via acoustic sensor hardware support); unlike extant calibration methods, the procedure does not need to rely on any synchronization among nodes, long-term correlation between their respective environments, or assumptions about training events. And by accurate, we mean the procedure yields low error in SNR estimation. We provide experimental validation of the difficulty of directly calibrating acoustic signals and the accuracy of our SNR calibration procedure.
{"title":"Feature Calibration in Sensor Networks","authors":"H. Cao, A. Arora, Emre Ertin, Kenneth W. Parker","doi":"10.1109/ISPA.2008.52","DOIUrl":"https://doi.org/10.1109/ISPA.2008.52","url":null,"abstract":"Despite recent theory development, methods of calibration that accurately recover signals from biased sensor readings remain limited in their applicability. Acoustic sensors, for instance, which have been popular in low power wireless sensor networks, are difficult to calibrate in this manner, given their significant hardware variability, large dynamic range, sensitivity to battery power level, and complex spatial/temporal environmental variations. In this paper, we submit that the applicability of calibration is broadened by lifting the calibration problem from the level of sensors to that of sensing applications. We show feasibility of adaptive, easy, and accurate calibration at the level of application-specific features, via an example of recovering the feature of acoustic signal-to-noise ratio (SNR) that is useful in event-detection applications. By easy, we mean there is an efficient, purely local, and stimulus-free procedure for recovering SNR (that compares measured variances for multiple randomly chosen sensitivities, effected via acoustic sensor hardware support); unlike extant calibration methods, the procedure does not need to rely on any synchronization among nodes, long-term correlation between their respective environments, or assumptions about training events. And by accurate, we mean the procedure yields low error in SNR estimation. We provide experimental validation of the difficulty of directly calibrating acoustic signals and the accuracy of our SNR calibration procedure.","PeriodicalId":345341,"journal":{"name":"2008 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130857524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
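The stimulus-free procedure described above compares measured variances at multiple sensitivities; a toy two-gain model (ours, not the paper's) shows why such a comparison can separate signal power from gain-independent noise power:

```python
def estimate_snr(g1, v1, g2, v2):
    """Toy two-sensitivity model: measured variance v = g**2 * p_sig
    + p_noise, where the gain g scales the acoustic input but not the
    gain-independent noise floor. Measuring at two gains gives two
    equations, solvable locally without stimuli or node synchronization."""
    p_sig = (v1 - v2) / (g1 ** 2 - g2 ** 2)
    p_noise = v1 - g1 ** 2 * p_sig
    return p_sig / p_noise
```

With p_sig = 2 and p_noise = 1, gains 1 and 2 yield variances 3 and 9, from which the procedure recovers SNR = 2 exactly.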
Node mobility is one of the most important factors that may degrade network performance and restrict network scalability in mobile ad hoc networks. An effective way to reduce the impact of node mobility is to select long lifetime routing paths in the network. We propose a link lifetime-based segment-by-segment routing protocol (LL-SSR) in mobile ad hoc networks, where each node maintains a routing table for its k-hop region. Simulation studies show that LL-SSR has better scalability and higher packet delivery ratio when compared with GPSR.
{"title":"Link Lifetime-Based Segment-by-Segment Routing Protocol in MANETs","authors":"Yujie Chen, Guojun Wang, Sancheng Peng","doi":"10.1109/ISPA.2008.39","DOIUrl":"https://doi.org/10.1109/ISPA.2008.39","url":null,"abstract":"Node mobility is one of the most important factors that may degrade network performance and restrict network scalability in mobile ad hoc networks. An effective way to reduce the impact of node mobility is to select long lifetime routing paths in the network. We propose a link lifetime-based segment-by-segment routing protocol (LL-SSR) in mobile ad hoc networks, where each node maintains a routing table for its k-hop region. Simulation studies show that LL-SSR has better scalability and higher packet delivery ratio when compared with GPSR.","PeriodicalId":345341,"journal":{"name":"2008 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130618141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
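Selecting long-lifetime paths, as LL-SSR aims to do, is commonly formalised as maximising a path's minimum link lifetime; a sketch of that criterion (the data structures are our own, and LL-SSR's segment-by-segment details are in the paper):

```python
def best_path(paths, lifetime):
    """Pick the candidate path whose weakest link lives longest:
    a path's lifetime is the minimum lifetime over its links, given
    in `lifetime` as a dict from (u, v) edges to predicted lifetimes."""
    def path_lifetime(path):
        return min(lifetime[(u, v)] for u, v in zip(path, path[1:]))
    return max(paths, key=path_lifetime)
```

Under LL-SSR each node would apply such a criterion within its k-hop region, choosing the next segment rather than the full end-to-end path.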