Survivable SCADA Systems: An Analytical Framework Using Performance Modelling
Carlos Queiroz, A. Mahmood, Z. Tari
Pub Date: 2010-12-01 | DOI: 10.1109/GLOCOM.2010.5683323
Supervisory Control and Data Acquisition (SCADA) systems control and monitor industrial and critical infrastructure functions such as electricity, gas, water, waste management, railways and traffic. Recently, SCADA systems have been targeted by an increasing number of attacks from the Internet due to their growing connectivity to enterprise networks. Traditional techniques and models for identifying attacks and quantifying their impact cannot be directly applied to SCADA systems because of their limited resources and real-time operating characteristics. This paper introduces a novel framework for evaluating the survivability of SCADA systems from a service-oriented perspective. The framework uses an analytical model based on queuing theory and Bayesian networks to evaluate the performance of individual services and the survivability of the overall system. We further discuss how the conditional probability tables of the Bayesian networks can be built automatically by learning from historical or simulated data.
2010 IEEE Global Telecommunications Conference (GLOBECOM 2010), pp. 1-5.
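The last point of the abstract — building conditional probability tables (CPTs) from historical or simulated data — reduces to maximum-likelihood counting when the data is fully observed. A minimal sketch of that idea (not the paper's implementation; the variable names are hypothetical):

```python
from collections import Counter

def learn_cpt(samples, child, parents):
    """Maximum-likelihood CPT: P(child | parents) estimated by counting.

    samples: list of dicts mapping variable name -> observed state.
    Returns {parent_state_tuple: {child_state: probability}}.
    """
    joint = Counter()   # counts of (parent states, child state)
    marg = Counter()    # counts of parent states alone
    for s in samples:
        key = tuple(s[p] for p in parents)
        joint[(key, s[child])] += 1
        marg[key] += 1
    return {
        key: {c: n / marg[key] for (k, c), n in joint.items() if k == key}
        for key in marg
    }

# Hypothetical SCADA observations: queue load vs. service health.
data = [
    {"queue_load": "high", "service_ok": "no"},
    {"queue_load": "high", "service_ok": "no"},
    {"queue_load": "high", "service_ok": "yes"},
    {"queue_load": "low",  "service_ok": "yes"},
]
cpt = learn_cpt(data, child="service_ok", parents=["queue_load"])
```

With this data, `cpt[("high",)]["no"]` comes out to 2/3 — the empirical conditional frequency.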
Analog Equalization for Low Power 60 GHz Receivers in Realistic Multipath Channels
Khursheed Hassan, T. Rappaport, J. Andrews
Pub Date: 2010-12-01 | DOI: 10.1109/GLOCOM.2010.5683699
Multi-gigabit-per-second wireless network devices are emerging for personal area networks (PANs) in the 60 GHz band. Such devices are typically power hungry, largely due to the requisite high-speed analog-to-digital converters (ADCs), which can consume from tens to hundreds of milliwatts. This paper studies the use of analog equalization before the ADC to reduce the required ADC resolution. We provide a novel analysis that uses a superposition model for multipath energy and derive a closed-form expression relating ADC resolution to the channel state and to the bit error rate (BER) of M-QAM constellations. Simulations verify that analog equalization can reduce the link bit error rate by up to several orders of magnitude without increasing the number of quantization bits in the ADC.
2010 IEEE Global Telecommunications Conference (GLOBECOM 2010), pp. 1-5.
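The resolution/fidelity trade-off behind the ADC power argument is easy to see numerically: quantization SNR grows by roughly 6 dB per bit for a uniform quantizer. A sketch under simple assumptions (uniformly distributed input, uniform quantizer) — not the paper's receiver model:

```python
import numpy as np

def quantize(x, bits, full_scale=1.0):
    """Uniform quantizer: clip to the full-scale range, round to 2**bits levels."""
    step = 2 * full_scale / 2 ** bits
    q = np.clip(x, -full_scale, full_scale - step)
    return np.round(q / step) * step

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100_000)          # stand-in for a received waveform
sqnr_db = {}
for bits in (4, 6, 8):
    err = x - quantize(x, bits)
    sqnr_db[bits] = 10 * np.log10(np.mean(x**2) / np.mean(err**2))
# each additional bit buys roughly 6 dB of quantization SNR
```

This is why shifting equalization into the analog domain, so fewer bits suffice, saves substantial converter power.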
Incremental Forwarding Table Aggregation
Yaoqing Liu, Xin Zhao, Kyuhan Nam, Lan Wang, Beichuan Zhang
Pub Date: 2010-12-01 | DOI: 10.1109/GLOCOM.2010.5683158
The global routing table has been growing rapidly, outpacing the upgrade cycle of router hardware. Aggregating the Forwarding Information Base (FIB) has recently emerged as a promising solution: it reduces FIB size significantly in the short term and is compatible with any long-term architectural solution. Because FIB entries change dynamically with routing updates, an important component of any FIB aggregation scheme is handling routing updates efficiently while shrinking the FIB as much as possible. In this paper, we first propose two incremental FIB aggregation algorithms based on the ORTC scheme. We then quantify the trade-offs of the proposed algorithms, which will help operators choose the algorithms best suited to their networks.
2010 IEEE Global Telecommunications Conference (GLOBECOM 2010), pp. 1-6.
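The core move in ORTC-style aggregation is collapsing sibling prefixes that share a next hop into their shorter covering prefix. A toy illustration of that single rule on bit-string prefixes — heavily simplified, not the paper's incremental algorithms (real ORTC also re-labels internal nodes by the most popular next hop):

```python
def aggregate(fib):
    """Repeatedly merge sibling prefixes with the same next hop into their
    parent.  Prefixes are bit strings, e.g. '1010'; values are next hops."""
    fib = dict(fib)
    changed = True
    while changed:
        changed = False
        for p in sorted(fib, key=len, reverse=True):  # longest prefixes first
            if p and p in fib:
                sib = p[:-1] + ('1' if p[-1] == '0' else '0')
                parent = p[:-1]
                if fib.get(sib) == fib[p] and parent not in fib:
                    nh = fib.pop(p)
                    fib.pop(sib)
                    fib[parent] = nh        # two entries become one
                    changed = True
    return fib

fib = {'00': 'A', '01': 'A', '10': 'B', '11': 'B'}
agg = aggregate(fib)   # '00'/'01' collapse to '0', '10'/'11' to '1'
```

The incremental problem the paper addresses is keeping such an aggregated table consistent as individual entries are added and withdrawn, without re-running the whole pass.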
Discrete Time Faster-Than-Nyquist Signalling
M. McGuire, M. Sima
Pub Date: 2010-12-01 | DOI: 10.1109/GLOCOM.2010.5683272
Increasing the symbol rate of a QPSK digital communications system up to 25% beyond the Nyquist criterion over an AWGN channel does not significantly increase the bit error rate or the required transmission bandwidth. This so-called Faster-than-Nyquist (FTN) signalling has not been used in commercially deployed communications systems because previously proposed implementation schemes required high receiver complexity. This paper introduces a reformulation of FTN signalling in which a non-square matrix multiplies a sample vector of modulated QPSK symbols. It is shown that with this formulation the receiver complexity needed to detect the transmitted data over an AWGN channel is well within the complexity bounds of standard digital communication systems. The formulation enables an analysis of FTN signalling that directly compares it with standard higher-order modulation and data coding techniques.
2010 IEEE Global Telecommunications Conference (GLOBECOM 2010), pp. 1-5.
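The matrix formulation can be pictured concretely: stack time-shifted copies of the transmit pulse, spaced tau < 1 symbol periods apart, as the columns of a non-square matrix G, so the transmitted sample vector is G·a for a QPSK symbol vector a. A small numerical sketch — the truncated-sinc pulse and least-squares detection are illustrative assumptions, not the paper's receiver:

```python
import numpy as np

rng = np.random.default_rng(1)
N, sps, tau, span = 8, 4, 0.75, 4           # tau < 1 => faster than Nyquist
t = np.arange(-span * sps, span * sps + 1) / sps
pulse = np.sinc(t)                          # truncated ideal Nyquist pulse

shift = int(tau * sps)                      # 3 samples between symbols
M = len(pulse) + shift * (N - 1)
G = np.zeros((M, N))                        # non-square transmit matrix
for k in range(N):
    G[k * shift:k * shift + len(pulse), k] = pulse

a = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)  # QPSK
y = G @ a                                   # noiseless received samples

a_hat = np.linalg.pinv(G) @ y               # linear least-squares detection
```

Because the shifted pulse columns are linearly independent, G has full column rank and the least-squares detector recovers the symbols exactly in the noiseless case; the interesting regime, analyzed in the paper, is how this behaves with AWGN.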
Dimension Reduction of Virtual Coordinate Systems in Wireless Sensor Networks
Dulanjalie C. Dhanapala, A. Jayasumana
Pub Date: 2010-12-01 | DOI: 10.1109/GLOCOM.2010.5683099
Virtual Coordinate System (VCS) based routing schemes for sensor networks characterize each node by a coordinate vector of size M, consisting of the distances to each of a set of M anchors. The higher the number of anchors, the higher both the coordinate generation cost and the communication cost. Identifying an effective set of anchors and encapsulating the original VCS's information in a lower-dimensional VCS therefore enhances energy efficiency. Two main contributions toward this goal are presented. The first is a method for evaluating the amount of novel information an ordinate, i.e., an anchor, contributes beyond the coordinate space created by the remaining anchors. This method can identify unnecessary or inefficient anchors as well as good anchor locations, and thus helps lower routing overhead and power consumption. Second, a method for reducing the VCS dimensionality is presented. This Singular Value Decomposition (SVD) based method preserves the routability achieved in the original coordinate space while using fewer dimensions. Centralized and online realizations of the proposed algorithm are described. Performance analysis on different topologies with 40 anchors shows that the coordinate length can be reduced on average by a factor of 8 without degrading routability. Using novelty filtering to select effective anchors prior to SVD-based compression further improves routability.
2010 IEEE Global Telecommunications Conference (GLOBECOM 2010), pp. 1-6.
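The SVD step can be sketched directly: treat the VCS as a nodes-by-anchors distance matrix and keep only the top singular components as the new, shorter coordinates. The topology and anchor placement below are invented for illustration (a 1-D chain with hop distances):

```python
import numpy as np

# Hypothetical VCS: 30 nodes on a chain, 6 anchors; entry = hop distance.
nodes = np.arange(30)
anchors = np.array([0, 5, 11, 17, 23, 29])
P = np.abs(nodes[:, None] - anchors[None, :]).astype(float)

U, s, Vt = np.linalg.svd(P, full_matrices=False)
k = 2                                  # keep 2 of the 6 dimensions
coords = U[:, :k] * s[:k]              # reduced virtual coordinates
P_hat = coords @ Vt[:k]                # reconstruction from k components

rel_err = np.linalg.norm(P - P_hat) / np.linalg.norm(P)
# most of the distance structure survives in 2 dimensions
```

In an online realization the `Vt[:k]` basis would be distributed to nodes so each can project its own M-length coordinate vector locally; here the whole matrix is processed centrally for simplicity.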
Time Reversal Direction of Arrival Estimation with Cramer-Rao Bound Analysis
F. Foroozan, A. Asif
Pub Date: 2010-12-01 | DOI: 10.1109/GLOCOM.2010.5683345
In this paper, the effect of coupling time reversal (TR) with direction-of-arrival (DOA) estimation is studied through theoretical Cramer-Rao bound (CRB) analysis and numerical simulations. The proposed TR/DOA estimator adds a second stage that retransmits time-reversed versions of the observations made during the original forward probing stage. The backscatters of the time-reversed probing signals obtained from this second TR stage are used for DOA estimation based on the Capon algorithm. Simulation results and CRBs comparing the performance of the proposed TR/DOA estimator with that of the conventional approach, which relies only on observations from the forward probing stage, are presented.
2010 IEEE Global Telecommunications Conference (GLOBECOM 2010), pp. 1-5.
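The Capon estimator underlying both stages scans candidate angles and scores each by the inverse of the minimum-variance beamformer output power, P(θ) = 1 / (aᴴR⁻¹a). A self-contained sketch for a plain uniform linear array with one source — no TR stage, which is the paper's contribution:

```python
import numpy as np

rng = np.random.default_rng(3)
m, snaps, d = 8, 200, 0.5               # 8-element ULA, half-wavelength spacing
theta_true = 20.0                       # degrees (assumed for illustration)

def steer(theta_deg):
    k = np.arange(m)
    return np.exp(2j * np.pi * d * k * np.sin(np.radians(theta_deg)))

sig = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
noise = 0.1 * (rng.standard_normal((m, snaps)) + 1j * rng.standard_normal((m, snaps)))
X = np.outer(steer(theta_true), sig) + noise
R = X @ X.conj().T / snaps              # sample covariance

grid = np.arange(-90, 90.5, 0.5)
Rinv = np.linalg.inv(R)
p = np.array([1 / np.real(a.conj() @ Rinv @ a) for a in map(steer, grid)])
theta_hat = grid[np.argmax(p)]          # Capon spectrum peak
```

The TR/DOA scheme in the paper feeds the backscatter of retransmitted, time-reversed probes into this same spectral search, which is what tightens the CRB.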
An Optimal Solution to the Distributed Data Retrieval Problem
M. Chaudhry, Z. Asad, A. Sprintson
Pub Date: 2010-12-01 | DOI: 10.1109/GLOCOM.2010.5683975
We consider the problem of accessing large data files stored at multiple locations across a content distribution, peer-to-peer, or massive storage network. We assume that the data is stored, in either original or encoded form, at multiple network locations. Clients access the data through simultaneous downloads from several servers across the network. The central problem in this context is to find a set of disjoint paths of minimum total cost that connect the client with a set of servers such that the data stored at those servers is sufficient to decode the required file. We refer to this problem as the Distributed Data Retrieval (DDR) problem. We present an efficient polynomial-time solution that leverages the matroid intersection algorithm. Our experimental study shows the advantage of our solution over alternative approaches.
2010 IEEE Global Telecommunications Conference (GLOBECOM 2010), pp. 1-6.
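If the decodability constraint is dropped — the part that motivates matroid intersection in the paper — the remaining subproblem of k minimum-total-cost edge-disjoint paths from the client to any k servers is a unit-capacity min-cost flow. A sketch via successive shortest paths (Bellman-Ford, since residual arcs carry negative costs); the graph and costs are invented:

```python
from collections import defaultdict

def min_cost_disjoint_paths(edges, src, servers, k):
    """Min-total-cost set of k edge-disjoint src->server paths via
    unit-capacity successive-shortest-path min-cost flow.  A simplified
    DDR variant: it ignores which server subsets suffice for decoding."""
    SINK = object()
    cap = defaultdict(int)
    cost = {}
    def add(u, v, c):
        cap[(u, v)] += 1
        cost[(u, v)] = c
        cost.setdefault((v, u), -c)     # residual arc cost
    for u, v, c in edges:
        add(u, v, c)
    for s in servers:
        add(s, SINK, 0)                 # free edge from each server to sink
    total = 0
    for _ in range(k):
        dist, pred = {src: 0}, {}
        nodes = {src, SINK} | {u for u, v in cap} | {v for u, v in cap}
        for _ in range(len(nodes)):     # Bellman-Ford over residual edges
            updated = False
            for (u, v), c in cap.items():
                if c > 0 and u in dist and \
                        dist[u] + cost[(u, v)] < dist.get(v, float('inf')):
                    dist[v] = dist[u] + cost[(u, v)]
                    pred[v] = u
                    updated = True
            if not updated:
                break
        if SINK not in dist:
            return None                 # fewer than k disjoint paths exist
        total += dist[SINK]
        v = SINK                        # push one unit along the path
        while v != src:
            u = pred[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
    return total

edges = [('C', 'a', 1), ('C', 'b', 2), ('a', 'S1', 1),
         ('b', 'S2', 1), ('a', 'S2', 5)]
best = min_cost_disjoint_paths(edges, 'C', ['S1', 'S2'], 2)
```

Selecting *which* servers jointly allow decoding of the coded file is what requires the matroid intersection machinery on top of this flow structure.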
Co-Scheduling Computational and Networking Resources in E-Science Optical Grids
Mohamed Abouelela, M. El-Darieby
Pub Date: 2010-12-01 | DOI: 10.1109/GLOCOM.2010.5683835
As e-science applications become increasingly data-intensive, data is generated and stored at different locations and can be divided into independent subsets to be analyzed, in a distributed fashion, at many compute locations across an optical grid. Optimal utilization of optical grid resources is required, and is generally achieved by minimizing application completion time, calculated as the sum of the times spent on data transmission and analysis. We propose a Genetic Algorithm (GA) based approach that co-schedules computing and networking resources toward this objective. The approach produces a schedule defining which data subsets to transfer to which sites, and at what times, so as to minimize data processing time, as well as the routes over which the subsets are transferred so as to minimize data transfer times. Simulation results show the advantages of the proposed approach in minimizing the maximum application completion time while reducing the overall genetic algorithm execution time.
2010 IEEE Global Telecommunications Conference (GLOBECOM 2010), pp. 1-5.
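A minimal GA of the kind described — chromosome = subset-to-site assignment, fitness = completion time (transfer plus analysis) — can be sketched as follows. The instance, operators, and parameters are invented for illustration and ignore the routing dimension:

```python
import random

random.seed(4)
# Hypothetical instance: 6 data subsets, 3 compute sites.
transfer = [[2, 5, 9], [4, 1, 7], [8, 6, 2],
            [3, 3, 3], [6, 2, 4], [5, 8, 1]]   # transfer[j][s]: subset j -> site s
compute = [4, 4, 4, 4, 4, 4]                   # analysis time per subset

def makespan(assign):
    """Completion time: the most-loaded site finishes last."""
    load = [0, 0, 0]
    for j, site in enumerate(assign):
        load[site] += transfer[j][site] + compute[j]
    return max(load)

def ga(pop_size=40, gens=60, pmut=0.2):
    pop = [[random.randrange(3) for _ in range(6)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=makespan)
        elite = pop[:pop_size // 2]            # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, 6)
            child = a[:cut] + b[cut:]          # one-point crossover
            if random.random() < pmut:         # point mutation
                child[random.randrange(6)] = random.randrange(3)
            children.append(child)
        pop = elite + children
    return min(pop, key=makespan)

best = ga()
```

For this instance the optimum makespan is 13 (e.g. subsets {0,3} at site 0, {1,4} at site 1, {2,5} at site 2); the elitist GA reliably gets close, while the paper's version additionally evolves transfer routes and start times.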
Suboptimal and Optimal MIMO-OFDM Iterative Detection Schemes
P. Xiao, Zihuai Lin, W. Yin, C. Cowan
Pub Date: 2010-12-01 | DOI: 10.1109/GLOCOM.2010.5683263
A novel iterative detection scheme for MIMO-OFDM systems is proposed in this work. We show that existing detection schemes are suboptimal, and that the iterative process can be optimized by exploiting the non-circular property of the residual interference after interference cancellation. Results show that the proposed iterative scheme outperforms the conventional iterative soft interference cancellation (ISIC) and V-BLAST schemes by about 1.7 dB and 4.0 dB, respectively, in a 4 × 4 antenna system over exponentially distributed eleven-path channels.
2010 IEEE Global Telecommunications Conference (GLOBECOM 2010), pp. 1-5.
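For context, the V-BLAST baseline the authors compare against is ordered successive interference cancellation: detect the strongest remaining stream, subtract its contribution, repeat. A noiseless zero-forcing sketch of that baseline (not the paper's proposed detector):

```python
import numpy as np

rng = np.random.default_rng(5)
nt = 4
H = (rng.standard_normal((nt, nt)) + 1j * rng.standard_normal((nt, nt))) / np.sqrt(2)
x = (rng.choice([-1, 1], nt) + 1j * rng.choice([-1, 1], nt)) / np.sqrt(2)  # QPSK
y = H @ x                                   # noiseless for illustration

def sic(y, H):
    """Ordered ZF successive interference cancellation (V-BLAST style)."""
    H, y = H.copy(), y.copy()
    idx = list(range(H.shape[1]))           # which original stream each column is
    x_hat = np.zeros(len(idx), complex)
    while idx:
        W = np.linalg.pinv(H)               # ZF filter for remaining streams
        post_snr = 1 / np.linalg.norm(W, axis=1) ** 2
        i = int(np.argmax(post_snr))        # detect the strongest stream first
        z = W[i] @ y
        s = (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)  # QPSK slicer
        x_hat[idx[i]] = s
        y = y - H[:, i] * s                 # cancel its contribution
        H = np.delete(H, i, axis=1)
        idx.pop(i)
    return x_hat

x_hat = sic(y, H)
```

The paper's point is that after such cancellation the residual interference is non-circular, and a detector that exploits this (rather than treating it as circular Gaussian) recovers the quoted 1.7 dB over soft cancellation.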
A Distributed Wake-Up Scheduling for Opportunistic Forwarding in Wireless Sensor Networks
Chul-Ho Lee, Do Young Eun
Pub Date: 2010-12-01 | DOI: 10.1109/GLOCOM.2010.5683254
In wireless sensor networks (WSNs), sensor nodes are typically subject to energy constraints and prone to topology changes. While duty cycling has been widely used for energy conservation in WSNs, random walks have been popular for many delay-tolerant WSN applications due to their inherent desirable properties. In this paper, we consider opportunistic forwarding under asynchronous and heterogeneous duty cycling. We first show that the resulting packet trajectory can be interpreted as a continuous-time random walk, and then provide an analytical formula for its end-to-end delay. Since extremely large end-to-end delays are undesirable even for most delay-tolerant applications, we develop a distributed wake-up scheduling algorithm in which each node autonomously adjusts its (heterogeneous) wake-up rate based only on its own degree information so as to improve the worst-case end-to-end delay. In particular, we prove that our algorithm outperforms pure homogeneous duty cycling, where every node uses the same wake-up rate, in its guaranteed asymptotic upper bound on the worst-case delay for any graph. In addition, numerical evaluations and independent simulations over various settings of random geometric graphs show that the proposed algorithm yields more than 35% average performance improvement over pure homogeneous duty cycling.
2010 IEEE Global Telecommunications Conference (GLOBECOM 2010), pp. 1-5.
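The forwarding model in the abstract — the packet holder relays to whichever neighbour wakes up first, with exponential wake-up clocks — is straightforward to simulate as a continuous-time random walk. A toy Monte Carlo on an invented 6-node path, comparing homogeneous rates against a degree-proportional allocation of the same total wake-up budget (a stand-in for, not a reproduction of, the paper's scheduling rule):

```python
import random

random.seed(6)
# Toy topology (assumed): path 0-1-2-3-4-5; packet travels from node 0 to 5.
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}

def mean_delay(rates, trials=5000):
    """Monte Carlo end-to-end delay: the holder forwards to the first
    neighbour that wakes (independent exponential clocks, rate per node)."""
    total = 0.0
    for _ in range(trials):
        node, t = 0, 0.0
        while node != 5:
            nbr_rates = [rates[v] for v in adj[node]]
            t += random.expovariate(sum(nbr_rates))     # wait for first wake-up
            node = random.choices(adj[node], nbr_rates)[0]
        total += t
    return total / trials

homo = mean_delay({v: 1.0 for v in range(6)})
deg = {v: len(adj[v]) for v in range(6)}
budget = 6.0 / sum(deg.values())            # same total wake-up rate as homo
hetero = mean_delay({v: budget * deg[v] for v in range(6)})
```

On this near-regular toy path the two allocations perform comparably; the paper's 35% average gains arise on irregular topologies such as random geometric graphs, where degree information carries real routing value.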