Internet-based distributed systems enable globally scattered resources to be collectively pooled and used in a cooperative manner to achieve unprecedented petascale supercomputing capabilities. Numerous resource discovery approaches have been proposed to help achieve this goal. To report or discover a multi-attribute resource, most approaches use multiple messages, one per attribute, leading to high overhead. Another class of approaches reduces multiple attributes to a single index, but it is not practically effective in an environment with a large number of different resource attributes. Furthermore, few approaches are able to locate resources geographically close to the requesters, which is critical to system performance. This paper presents a P2P-based intelligent resource discovery (PIRD) mechanism that weaves all attributes into a set of indices using locality-sensitive hashing and then maps the indices onto a structured P2P overlay. PIRD further incorporates the Lempel-Ziv-Welch algorithm to compress attribute information for higher efficiency, and it relies on a hierarchical P2P structure to locate resources geographically close to requesters. Theoretical analysis and simulation results demonstrate that, compared with other approaches, PIRD dramatically reduces overhead and significantly improves the efficiency and effectiveness of resource discovery.
{"title":"PIRD: P2P-Based Intelligent Resource Discovery in Internet-Based Distributed Systems","authors":"Haiying Shen, Ze Li, Ting Li, Yingwu Zhu","doi":"10.1109/ICDCS.2008.9","DOIUrl":"https://doi.org/10.1109/ICDCS.2008.9","url":null,"abstract":"Internet-based distributed systems enable globally scattered resources to be collectively pooled and used in a cooperative manner to achieve unprecedented petascale super computing capabilities. Numerous resource discovery approaches have been proposed to help achieve this goal. To report or discover a multi-attribute resource, most approaches use multiple messages with each message for an attribute, leading to high overhead. Anther approach can reduce multi-attribute to one index, but it is not practically effective in an environment with a large number of different resource attributes. Furthermore, few approaches are able to locate resources geographically close to the requesters, which is critical to system performance. This paper presents a P2P-based intelligent resource discovery (PIRD) mechanism that weaves all attributes into a set of indices using locality sensitive hashing, and then maps the indices to a structured P2P. It further incorporates Lempel-Ziv-Welch algorithm to compress attribute information for higher efficiency. In addition, it helps to search resources geographically close to requesters by relying on a hierarchical P2P structure. PIRD significantly reduces overhead and improves the efficiency and effectiveness of resource discovery. Theoretical analysis and simulation results demonstrate the efficiency of PIRD in comparison with other approaches. It dramatically reduces overhead and yields significant improvements on the efficiency of resource discovery.","PeriodicalId":240205,"journal":{"name":"2008 The 28th International Conference on Distributed Computing Systems","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133690893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed Denial-of-Service (DDoS) attacks have emerged as a popular means of causing mass targeted service disruptions, often for extended periods of time. The relative ease and low cost of launching such attacks, compounded by the current lack of viable defense mechanisms, have made them one of the top threats to the Internet community today. While distributed packet logging and/or packet marking have been explored in the past for DDoS attack traceback and mitigation, we propose to advance the state of the art with a novel distributed divide-and-conquer approach to designing a data dissemination architecture that efficiently tracks attack sources. The main focus of our work is to tackle the three disjoint aspects of the problem, namely attack tree construction, attack path frequency detection, and packet-to-path association, independently, and to use succinct recurrence relations to express their individual implementations. We also evaluate the network traffic and storage overhead induced by our proposed deployment on real-life Internet topologies, supporting hundreds of victims each subject to thousands of high-bandwidth flows simultaneously, and conclude that we can achieve single-packet traceback guarantees with minimal overhead and high efficiency.
{"title":"Distributed Divide-and-Conquer Techniques for Effective DDoS Attack Defenses","authors":"M. Muthuprasanna, G. Manimaran","doi":"10.1109/ICDCS.2008.10","DOIUrl":"https://doi.org/10.1109/ICDCS.2008.10","url":null,"abstract":"Distributed Denial-of-Service (DDoS) attacks have emerged as a popular means of causing mass targeted service disruptions, often for extended periods of time. The relative ease and low costs of launching such attacks, supplemented by the current woeful state of any viable defense mechanism, have made them one of the top threats to the Internet community today. While distributed packet logging and/or packet marking have been explored in the past for DDoS attack traceback/mitigation, we propose to advance the state of the art by using a novel distributed divide-and-conquer approach in designing a new data dissemination architecture that efficiently tracks attack sources. The main focus of our work is to tackle the three disjoint aspects of the problem, namely attack tree construction, attack path frequency detection, and packet to path association, independently and to use succinct recurrence relations to express their individual implementations. We also evaluate the network traffic and storage overhead induced by our proposed deployment on real-life Internet topologies, supporting hundreds of victims each subject to thousands of high-bandwidth flows simultaneously, and conclude that we can truly achieve single packet traceback guarantees with minimal overhead and high efficiency.","PeriodicalId":240205,"journal":{"name":"2008 The 28th International Conference on Distributed Computing Systems","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134193164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We give efficient protocols for secure and private k-nearest neighbor (k-NN) search, when the data is distributed between two parties who want to cooperatively compute the answers without revealing to each other their private data. Our protocol for the single-step k-NN search is provably secure and has linear computation and communication complexity. Previous work on this problem had quadratic complexity and also leaked information about the parties' inputs. We adapt our techniques to also solve the general multi-step k-NN search, and describe a specific embodiment of it for the case of sequence data. The protocols and correctness proofs can be extended to suit other privacy-preserving data mining tasks, such as classification and outlier detection.
{"title":"Efficient Privacy-Preserving k-Nearest Neighbor Search","authors":"Yinian Qi, M. Atallah","doi":"10.1109/ICDCS.2008.79","DOIUrl":"https://doi.org/10.1109/ICDCS.2008.79","url":null,"abstract":"We give efficient protocols for secure and private k-nearest neighbor (k-NN) search, when the data is distributed between two parties who want to cooperatively compute the answers without revealing to each other their private data. Our protocol for the single-step k-NN search is provably secure and has linear computation and communication complexity. Previous work on this problem had a quadratic complexity, and also leaked information about the parties' inputs. We adapt our techniquesto also solve the general multi-step k-NN search, and describe a specific embodiment of it for the case of sequence data. The protocols and correctness proofs can be extended to suit other privacy-preserving data mining tasks, such as classification and outlier detection.","PeriodicalId":240205,"journal":{"name":"2008 The 28th International Conference on Distributed Computing Systems","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131785175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yingshu Li, Chunyu Ai, Wiwek P. Deshmukh, Yiwei Wu
Wireless sensor networks (WSNs) are employed in many applications in order to collect data. One key challenge is to minimize energy consumption to prolong network lifetime. A scheme that puts some nodes to sleep and estimates their values from the readings of the remaining active nodes has been shown to be energy-efficient. To improve the precision of such estimation, we propose two powerful estimation models: data estimation using a physical model (DEPM) and data estimation using a statistical model (DESM). DEPM estimates the values of sleeping nodes from the physical characteristics of the sensed attributes, while DESM estimates the values through the spatial and temporal correlations among the nodes. Experimental results on real sensor networks show that the proposed techniques provide accurate estimations and conserve energy efficiently.
{"title":"Data Estimation in Sensor Networks Using Physical and Statistical Methodologies","authors":"Yingshu Li, Chunyu Ai, Wiwek P. Deshmukh, Yiwei Wu","doi":"10.1109/ICDCS.2008.22","DOIUrl":"https://doi.org/10.1109/ICDCS.2008.22","url":null,"abstract":"Wireless sensor networks (WSNs) are employed in many applications in order to collect data. One key challenge is to minimize energy consumption to prolong network lifetime. A scheme of making some nodes asleep and estimating their values according to the other active nodespsila readings has been proved energy-efficient. For the purpose of improving the precision of estimation, we propose two powerful estimation models, data estimation using physical model (DEPM) and data estimation using statistical model (DESM). DEPM estimates the values of sleeping nodes by the physical characteristics of sensed attributes, while DESM estimates the values through the spatial and temporal correlations of the nodes. Experimental results on real sensor networks show that the proposed techniques provide accurate estimations and conserve energy efficiently.","PeriodicalId":240205,"journal":{"name":"2008 The 28th International Conference on Distributed Computing Systems","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124536372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quorum-based power saving (QPS) protocols have been proposed for ad hoc networks (e.g., IEEE 802.11 ad hoc mode) to increase energy efficiency and prolong the operational time of mobile stations. These protocols assign to each station a cycle pattern that specifies when the station should wake up (to transmit/receive data) and sleep (to save battery power). In all existing QPS protocols, the cycle length is either identical for all stations or is restricted to certain numbers (e.g., squares or primes). These restrictions on cycle length severely limit the practical use of QPS protocols, as each individual station may want to select a cycle length that is best suited for its own need (in terms of remaining battery power, tolerable packet delay, and drop ratio). In this paper we propose the notion of the hyper quorum system (HQS), a generalization of QPS that allows for arbitrary cycle lengths. We describe algorithms to generate two different classes of HQS given any set of arbitrary cycle lengths as input. We then present analytical and simulation results that show the benefits of HQS-based power saving protocols over the existing QPS protocols.
{"title":"Fully Adaptive Power Saving Protocols for Ad Hoc Networks Using the Hyper Quorum System","authors":"Shan-Hung Wu, Ming-Syan Chen, Chung-Min Chen","doi":"10.1109/ICDCS.2008.88","DOIUrl":"https://doi.org/10.1109/ICDCS.2008.88","url":null,"abstract":"Quorum-based power saving (QPS) protocols have been proposed for ad hoc networks (e.g., IEEE 802.11 ad hoc mode) to increase energy efficiency and prolong the operational time of mobile stations. These protocols assign to each station a cycle pattern that specifies when the station should wake up (to transmit/receive data) and sleep (to save battery power). In all existing QPS protocols, the cycle length is either identical for all stations or is restricted to certain numbers (e.g. squares or primes). These restrictions on cycle length severely limit the practical use of QPS protocols as each individual station may want to select a cycle length that is best suited for its own need (in terms of remaining battery power, tolerable packet delay, and drop ratio). In this paper we propose the notion of hyper quorum system (HQS)-a generalization of QPS that allows for arbitrary cycle lengths. We describe algorithms to generate two different classes of HQS given any set of arbitrary cycle lengths as input. We then present analytical and simulation results that show the benefits of HQS-based power saving protocols over the existing QPS protocols.","PeriodicalId":240205,"journal":{"name":"2008 The 28th International Conference on Distributed Computing Systems","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126560852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-dimensional storage virtualization (MDSV) technology allows multiple virtual disks, each with a distinct combination of capacity, latency, and bandwidth requirements, to be multiplexed on a physical disk storage system with performance isolation. This paper presents novel design and implementation techniques that solve the availability guarantee and fairness assurance problems in multi-dimensional storage virtualization. First, we show that a measurement-based admission control algorithm can reduce the effective resource requirement of a virtual disk with an availability guarantee by accurately estimating its resource needs without prior knowledge of its input workload characteristics. Moreover, to accurately factor disk access overhead into the real-time disk request scheduling algorithm, we propose a virtual disk switching overhead extraction and distribution algorithm that can derive the intrinsic disk access overhead associated with each virtual disk so as to achieve perfect performance isolation. Finally, we develop an adaptive server time leap-forward algorithm to effectively address the short-term unfairness problem of the virtual clock-based disk scheduler, the only known proportional-share scheduler that is based on wall-clock time and thus enables disk utilization efficiency optimization while delivering disk QoS guarantees.
{"title":"Availability and Fairness Support for Storage QoS Guarantee","authors":"Gang Peng, T. Chiueh","doi":"10.1109/ICDCS.2008.107","DOIUrl":"https://doi.org/10.1109/ICDCS.2008.107","url":null,"abstract":"Multi-dimensional storage virtualization (MDSV) technology allows multiple virtual disks, each with a distinct combination of capacity, latency and bandwidth requirements, to be multiplexed on a physical disk storage system with performance isolation. This paper presents novel design and implementation techniques that solve the availability guarantee and fairness assurance problems in multi-dimensional storage virtualization. First, we show that a measurement-based admission control algorithm can reduce the effective resource requirement of a virtual disk with availability guarantee by accurately estimating its resource needs without prior knowledge of its input workload characteristics. Moreover, to accurately factor disk access overhead into real-time disk request scheduling algorithm, we propose a virtual disk switching overhead extraction and distribution algorithm that can derive the intrinsic disk access overhead associated with each virtual disk so as to achieve perfect performance isolation. Finally, we develop an adaptive server time leap-forward algorithm to effectively address the short-term unfairness problem of virtual clock-based disk scheduler, the only known proportional-share scheduler that is based on wall-clock time and thus enables disk utilization efficiency optimization while delivering disk QoS guarantees.","PeriodicalId":240205,"journal":{"name":"2008 The 28th International Conference on Distributed Computing Systems","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127205786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many event-driven localization methods have been proposed as low-cost, energy-efficient solutions for wireless sensor networks. In order to eliminate the requirement of accurately controlled events in existing approaches, we present a practical design that uses totally uncontrolled events for stationary sensor node positioning. The novel idea of this design is to estimate both the event generation parameters and the location of each sensor node by processing node sequences easily obtained from the uncontrolled event distribution. To demonstrate the generality of our design, both straight-line scan and circular wave propagation events are addressed in this paper, and we evaluate our approach through theoretical analysis, extensive simulation, and a physical testbed implementation with 41 MICAz motes. The evaluation results illustrate that, with only randomly generated events, our solution can effectively localize sensor nodes with excellent flexibility while adding no extra cost at the resource-constrained sensor node side. In addition, localization using uncontrolled events offers a promising option for achieving node positioning through natural ambient events.
{"title":"Sensor Node Localization Using Uncontrolled Events","authors":"Ziguo Zhong, Dan Wang, T. He","doi":"10.1109/ICDCS.2008.44","DOIUrl":"https://doi.org/10.1109/ICDCS.2008.44","url":null,"abstract":"Many event-driven localization methods have been proposed as low cost, energy efficient solutions for wireless senor networks. In order to eliminate the requirement of accurately controlled events in existing approaches, we present a practical design using totally uncontrolled events for stationary sensor node positioning. The novel idea of this design is to estimate both the event generation parameters and the location of each sensor node by processing node sequences easily obtained from uncontrolled event distribution. To demonstrate the generality of our design, both straight-line scan and circular wave propagation events are addressed in this paper, and we evaluated our approach through theoretical analysis, extensive simulation and a physical test bed implementation with 41 MICAz motes. The evaluation results illustrate that with only randomly generated events, our solution can effectively localize sensor nodes with excellent flexibility while adding no extra cost at the resource constrained sensor node side. In addition, localization using uncontrolled events provides a nice potential option of achieving node positioning through natural ambient events.","PeriodicalId":240205,"journal":{"name":"2008 The 28th International Conference on Distributed Computing Systems","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115254729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Networked storage incorporates networking technology and storage technology, greatly extending the reach of the storage subsystem. In this paper, we present a novel Quality of Service (QoS) scheduling scheme to satisfy the requirements of different QoS requests for access to the networked storage system. Our key ideas include breaking requests down into appropriately sized smaller chunks and taking network characteristics into consideration so that 1) each session channel has smoother data access, 2) resource requirements such as buffer usage are reduced, and 3) more urgent requests can preempt less urgent ones. Our experimental results show that our scheme is effective in achieving these goals.
{"title":"QoS Scheduling for Networked Storage System","authors":"Yingping Lu, D. Du, Chuanyi Liu, Xianbo Zhang","doi":"10.1109/ICDCS.2008.95","DOIUrl":"https://doi.org/10.1109/ICDCS.2008.95","url":null,"abstract":"Networked storage incorporates networking technology and storage technology, greatly extending the reach of the storage subsystem. In this paper, we present a novel Quality of Service (QoS) scheduling scheme to satisfy the requirements of different QoS requests for access to the networked storage system. Our key ideas include breaking down the requests into appropriate chunks of smaller sizes and taking the network characteristics into consideration such that 1) each session channel has smoother data access, 2) resource requirements such as buffer usage are reduced, and 3) more urgent requests can preempt a less urgent request. Our experimental results show that our scheme is effective in obtaining these goals.","PeriodicalId":240205,"journal":{"name":"2008 The 28th International Conference on Distributed Computing Systems","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131192306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sajeeva L. Pallemulle, Haraldur D. Thorvaldsson, K. Goldman
Mission-critical services must be replicated to guarantee correctness and high availability in spite of arbitrary (Byzantine) faults. Traditional Byzantine fault tolerance protocols suffer from several major limitations. Some protocols do not support interoperability between replicated services. Other protocols provide poor fault isolation between services leading to cascading failures across organizational and application boundaries. Moreover, traditional protocols are unsuitable for applications with tiered architectures, long-running threads of computation, or asynchronous interaction between services. We present Perpetual, a protocol that supports Byzantine fault-tolerant execution of replicated services while enforcing strict fault isolation. Perpetual enables interaction between replicated services that may invoke and process remote requests asynchronously in long-running threads of computation. We present a modular implementation, an Axis2 Web Services extension, and experimental results that demonstrate only a moderate overhead due to replication.
{"title":"Byzantine Fault-Tolerant Web Services for n-Tier and Service Oriented Architectures","authors":"Sajeeva L. Pallemulle, Haraldur D. Thorvaldsson, K. Goldman","doi":"10.1109/ICDCS.2008.94","DOIUrl":"https://doi.org/10.1109/ICDCS.2008.94","url":null,"abstract":"Mission-critical services must be replicated to guarantee correctness and high availability in spite of arbitrary (Byzantine) faults. Traditional Byzantine fault tolerance protocols suffer from several major limitations. Some protocols do not support interoperability between replicated services. Other protocols provide poor fault isolation between services leading to cascading failures across organizational and application boundaries. Moreover, traditional protocols are unsuitable for applications with tiered architectures, long-running threads of computation, or asynchronous interaction between services. We present Perpetual, a protocol that supports Byzantine fault-tolerant execution of replicated services while enforcing strict fault isolation. Perpetual enables interaction between replicated services that may invoke and process remote requests asynchronously in long-running threads of computation. We present a modular implementation, an Axis2 Web Services extension, and experimental results that demonstrate only a moderate overhead due to replication.","PeriodicalId":240205,"journal":{"name":"2008 The 28th International Conference on Distributed Computing Systems","volume":"07 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131228550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we consider distributed utility-maximizing rate allocation in cyber-physical multihop wireless networks carrying prioritized elastic flows with different end-to-end delay requirements. This scenario arises in military wireless networks (dominated by audio and video flows) that must satisfy end-to-end deadlines. Due to the inherent difficulty of providing hard guarantees in such wireless environments, the problem is cast as one of utility maximization, where utility depends on meeting deadlines. Based on a recent result in real-time scheduling, we relate the end-to-end delay of prioritized flows to flow rates and priorities, then impose end-to-end delay constraints that can be expressed in a decentralized manner in terms of flow information available locally at each node. The utility maximization problem is then formulated in the presence of these constraints. The solution to this network utility maximization (NUM) problem yields a distributed rate control algorithm that nodes can independently execute to collectively maximize global network utility, taking delay constraints into account. Results from simulations demonstrate that a low deadline miss ratio is achieved for real-time packets without significantly impacting throughput, resulting in a higher total utility compared to a previous state-of-the-art approach.
{"title":"Bandwidth Allocation for Elastic Real-Time Flows in Multihop Wireless Networks Based on Network Utility Maximization","authors":"P. Jayachandran, T. Abdelzaher","doi":"10.1109/ICDCS.2008.86","DOIUrl":"https://doi.org/10.1109/ICDCS.2008.86","url":null,"abstract":"In this paper, we consider distributed utility maximizing rate allocation in cyber-physical multihop wireless networks carrying prioritized elastic flows with different end-to-end delay requirements. This scenario arises in military wireless networks (dominated by audio and video flows) that must satisfy end-to-end deadlines. Due to the inherent difficulty in providing hard guarantees in such wireless environments, the problem is cast as one of utility maximization, where utility depends on meeting deadlines. Based on a recent result in real-time scheduling, we relate end-to-end delay of prioritized flows to flow rates and priorities, then impose end-to-end delay constraints that can be expressed in a decentralized manner in terms of flow information available locally at each node. The problem of utility maximization in the presence of these constraints is formulated, where utility depends on the ability to meet deadlines. The solution to the network utility maximization (NUM) problem yields a distributed rate control algorithm that nodes can independently execute to collectively maximize global network utility, taking into account delay constraints. Results from simulations demonstrate that a low deadline miss ratio is achieved for real-time packets, without significantly impacting throughput, resulting in a higher total utility compared to a previous state-of-the-art approach.","PeriodicalId":240205,"journal":{"name":"2008 The 28th International Conference on Distributed Computing Systems","volume":"05 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129919027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}