Ptacek and Newsham [14] showed how to evade signature detection at Intrusion Prevention Systems (IPS) using TCP and IP fragmentation. These attacks are implemented in tools like FragRoute, and are institutionalized in IPS product tests. The classic defense is for the IPS to reassemble TCP and IP packets, and to consistently normalize the output stream. Current IPS standards require keeping state for 1 million connections. Both the state and processing requirements of reassembly and normalization are barriers to scalability for an IPS at speeds higher than 10 Gbps. In this paper, we suggest breaking with this paradigm using an approach we call Split-Detect. We focus on the simplest form of signature, an exact string match, and start by splitting the signature into pieces. By doing so, the attacker is either forced to include at least one piece completely in a packet, or to display potentially abnormal behavior (e.g., several small TCP fragments or out-of-order packets) that causes the attacker's flow to be diverted to a slow path. We prove that under certain assumptions this scheme can detect all byte-string evasions. We also show using real traces that the processing and storage requirements of this scheme can be 10% of those required by a conventional IPS, allowing reasonable-cost implementations at 20 Gbps. While the changes required by Split-Detect may be a barrier to adoption, this paper exposes the assumptions that must be changed to avoid normalization and reassembly in the fast path.
"Detecting evasion attacks at high speeds without reassembly," G. Varghese, J. Fingerhut, F. Bonomi. In Proceedings of the 2006 conference on Applications, technologies, architectures, and protocols for computer communications (August 11, 2006). DOI: 10.1145/1159913.1159951.
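The splitting idea lends itself to a minimal sketch (an illustration only, not the paper's implementation; the piece count and the per-packet matching logic below are assumptions):

```python
def split_signature(sig: bytes, k: int) -> list[bytes]:
    """Cut an exact-match signature into k roughly equal pieces."""
    n = len(sig)
    bounds = [round(i * n / k) for i in range(k + 1)]
    return [sig[bounds[i]:bounds[i + 1]] for i in range(k)]

def scan_packet(payload: bytes, pieces: list[bytes]) -> bool:
    """Fast path: flag the packet if any piece appears whole in it.
    An attacker who keeps every piece split across packet boundaries
    must send unusually small or out-of-order segments, behavior that
    diverts the flow to the slow path."""
    return any(p in payload for p in pieces)
```

With pieces shorter than a normal packet payload, a flow that carries the signature yet matches no piece in any packet must have fragmented it unusually finely, which is exactly the anomaly that triggers diversion.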
Fast-growing Internet applications like streaming media and telephony prefer timeliness to reliability, making TCP a poor fit. Unfortunately, UDP, the natural alternative, lacks congestion control. High-bandwidth UDP applications must implement congestion control themselves, a difficult task, or risk rendering congested networks unusable. We set out to ease the safe deployment of these applications by designing a congestion-controlled unreliable transport protocol. The outcome, the Datagram Congestion Control Protocol or DCCP, adds to a UDP-like foundation the minimum mechanisms necessary to support congestion control. We thought those mechanisms would resemble TCP's, but without reliability and, especially, cumulative acknowledgements, we had to reconsider almost every aspect of TCP's design. The resulting protocol sheds light on how congestion control interacts with unreliable transport, how modern network constraints impact protocol design, and how TCP's reliable bytestream semantics intertwine with its other mechanisms, including congestion control.
"Designing DCCP: congestion control without reliability," E. Kohler, M. Handley, S. Floyd. In Proceedings of the 2006 conference on Applications, technologies, architectures, and protocols for computer communications (August 11, 2006). DOI: 10.1145/1159913.1159918.
We present the design of a routing system in which end-systems set tags to select non-shortest-path routes as an alternative to explicit source routes. Routers collectively generate these routes by using tags as hints to independently deflect packets to neighbors that lie off the shortest path. We show how this can be done simply, by local extensions of the shortest-path machinery, and safely, so that loops are provably not formed. The result is to provide end-systems with a high level of path diversity that allows them to bypass undesirable locations within the network. Unlike explicit source routing, our scheme is inherently scalable and compatible with ISP policies because it derives from the deployed Internet routing. We also suggest an encoding that is compatible with common IP usage, making our scheme incrementally deployable at the granularity of individual routers.
"Source selectable path diversity via routing deflections," Xiaowei Yang, D. Wetherall. In Proceedings of the 2006 conference on Applications, technologies, architectures, and protocols for computer communications (August 11, 2006). DOI: 10.1145/1159913.1159933.
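One way to picture loop-free deflection is the following hedged sketch (the paper defines several deflection rules; this uses only the simplest cost-decreasing rule, and the graph encoding and tag handling here are illustrative assumptions):

```python
def deflection_candidates(cost, neighbors, node):
    """Neighbors strictly closer to the destination than the current
    node. Forwarding only 'downhill' in cost provably cannot loop,
    because the remaining cost decreases at every hop."""
    return sorted(n for n in neighbors[node] if cost[n] < cost[node])

def forward(cost, neighbors, src, dst, tag):
    """Follow deflections end to end, using the packet's tag as a
    hint to select among the loop-free candidates at each hop."""
    path = [src]
    while path[-1] != dst:
        cands = deflection_candidates(cost, neighbors, path[-1])
        path.append(cands[tag % len(cands)])
    return path
```

Different tags steer the packet onto different downhill paths, giving end-systems path diversity without explicit source routes.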
Sensor networks are especially useful in catastrophic or emergency scenarios such as floods, fires, terrorist attacks or earthquakes where human participation may be too dangerous. However, such disaster scenarios pose an interesting design challenge since the sensor nodes used to collect and communicate data may themselves fail suddenly and unpredictably, resulting in the loss of valuable data. Furthermore, because these networks are often expected to be deployed in response to a disaster, or because of sudden configuration changes due to failure, these networks are often expected to operate in a "zero-configuration" paradigm, where data collection and transmission must be initiated immediately, before the nodes have a chance to assess the current network topology. In this paper, we design and analyze techniques to increase "persistence" of sensed data, so that data is more likely to reach a data sink, even as network nodes fail. This is done by replicating data compactly at neighboring nodes using novel "Growth Codes" that increase in efficiency as data accumulates at the sink. We show that Growth Codes preserve more data in the presence of node failures than previously proposed erasure resilient techniques.
"Growth codes: maximizing sensor network data persistence," A. Kamra, V. Misra, Jon Feldman, D. Rubenstein. In Proceedings of the 2006 conference on Applications, technologies, architectures, and protocols for computer communications (August 11, 2006). DOI: 10.1145/1159913.1159943.
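The coding idea can be sketched as follows (a simplified illustration, not the paper's protocol; the degree schedule and single-step decoder are assumptions): a degree-d codeword is the XOR of d randomly chosen data symbols, sent at degree 1 early on and at higher degrees once the sink already knows most symbols.

```python
import random

def growth_encode(symbols, degree, rng):
    """One codeword: the XOR of `degree` randomly chosen symbols,
    tagged with the indices it combines. Degree starts at 1 and is
    raised over time as the sink accumulates decoded data."""
    idxs = tuple(sorted(rng.sample(range(len(symbols)), degree)))
    word = bytes(len(symbols[0]))
    for i in idxs:
        word = bytes(a ^ b for a, b in zip(word, symbols[i]))
    return idxs, word

def try_decode(idxs, word, known):
    """A codeword yields a new symbol when exactly one constituent
    is still unknown: XOR out the known ones."""
    missing = [i for i in idxs if i not in known]
    if len(missing) != 1:
        return None
    for i in idxs:
        if i in known:
            word = bytes(a ^ b for a, b in zip(word, known[i]))
    known[missing[0]] = word
    return missing[0]
```

The efficiency claim in the abstract corresponds to matching the degree to the sink's progress: degree-1 codewords are most useful when the sink knows little, higher degrees when all but one constituent is likely already decoded.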
Charles Reis, Ratul Mahajan, Maya Rodrig, D. Wetherall, J. Zahorjan
We present practical models for the physical layer behaviors of packet reception and carrier sense with interference in static wireless networks. These models use measurements of a real network rather than abstract RF propagation models as the basis for accuracy in complex environments. Seeding our models requires N trials in an N node network, in which each sender transmits in turn and receivers measure RSSI values and packet counts, both of which are easily obtainable. The models then predict packet delivery and throughput in the same network for different sets of transmitters with the same node placements. We evaluate our models for the base case of two senders that broadcast packets simultaneously. We find that they are effective at predicting when there will be significant interference effects. Across many predictions, we obtain an RMS error for 802.11a and 802.11b that is one half and one third, respectively, of the error of a measurement-based model that ignores interference.
"Measurement-based models of delivery and interference in static wireless networks," Charles Reis, Ratul Mahajan, Maya Rodrig, D. Wetherall, J. Zahorjan. In Proceedings of the 2006 conference on Applications, technologies, architectures, and protocols for computer communications (August 11, 2006). DOI: 10.1145/1159913.1159921.
M. Caesar, Tyson Condie, Jayanthkumar Kannan, K. Lakshminarayanan, I. Stoica
It is accepted wisdom that the current Internet architecture conflates network locations and host identities, but there is no agreement on how a future architecture should distinguish the two. One could sidestep this quandary by routing directly on host identities themselves, and eliminating the need for network-layer protocols to include any mention of network location. The key to achieving this is the ability to route on flat labels. In this paper we take an initial stab at this challenge, proposing and analyzing our ROFL routing algorithm. While its scaling and efficiency properties are far from ideal, our results suggest that the idea of routing on flat labels cannot be immediately dismissed.
"ROFL: routing on flat labels," M. Caesar, Tyson Condie, Jayanthkumar Kannan, K. Lakshminarayanan, I. Stoica. In Proceedings of the 2006 conference on Applications, technologies, architectures, and protocols for computer communications (August 11, 2006). DOI: 10.1145/1159913.1159955.
Feng Wang, Z. Morley Mao, Jia Wang, Lixin Gao, R. Bush
Extensive measurement studies have shown that end-to-end Internet path performance degradation is correlated with routing dynamics. However, the root cause of the correlation between routing dynamics and such performance degradation is poorly understood. In particular, how do routing changes result in degraded end-to-end path performance in the first place? How do factors such as topological properties, routing policies, and iBGP configurations affect the extent to which such routing events can cause performance degradation? Answers to these questions are critical for improving network performance. In this paper, we conduct extensive measurements that involve both controlled routing updates through two tier-1 ISPs and active probes of a diverse set of end-to-end paths on the Internet. We find that routing changes contribute significantly to end-to-end packet loss. Specifically, we study failover events, in which a link failure leads to a routing change, and recovery events, in which a link repair causes a routing change. In both cases, it is possible to experience data-plane performance degradation in the form of increased long loss bursts as well as forwarding loops. Furthermore, we find that common routing policies and iBGP configurations of ISPs can directly affect end-to-end path performance during routing changes. Our work provides new insights into potential measures that network operators can undertake to enhance network performance.
"A measurement study on the impact of routing events on end-to-end internet path performance," Feng Wang, Z. Morley Mao, Jia Wang, Lixin Gao, R. Bush. In Proceedings of the 2006 conference on Applications, technologies, architectures, and protocols for computer communications (August 11, 2006). DOI: 10.1145/1159913.1159956.
This work focuses on capacity overprovisioning (CO) as an alternative to admission control (AC) to implement quality of service (QoS) in packet-switched communication networks. CO prevents potential overload while AC protects the QoS of the traffic during overload situations. Overload may be caused, e.g., by fluctuations of the traffic rate on a link due to its normal stochastic behavior (a), by traffic shifts within the network due to popular contents (b), or by redirected traffic due to network failures (c). Capacity dimensioning methods for CO need to take into account all potential sources of overload, while AC can block excess traffic caused by (a) and (b) if the capacity does not suffice. The contributions of this paper are (1) the presentation of a capacity dimensioning method for networks with resilience requirements and changing traffic matrices, (2) the investigation of the impact of the mentioned sources of overload (a-c) on the required capacity for CO in networks with and without resilience requirements, and (3) a comparison of this required capacity with the one for AC. Our results show that in the presence of strong traffic shifts CO requires more capacity than AC. However, if resilience against network failures is required, both CO and AC need additional backup capacity for the redirected traffic. In this case, CO can use the backup capacity to absorb other types of overload. As a consequence, CO and AC have similar bandwidth requirements. These findings are robust with respect to network size.
"Capacity overprovisioning for networks with resilience requirements," M. Menth, Rüdiger Martin, J. Charzinski. In Proceedings of the 2006 conference on Applications, technologies, architectures, and protocols for computer communications (August 11, 2006). DOI: 10.1145/1159913.1159925.
Sumit Rangwala, R. Gummadi, R. Govindan, K. Psounis
In a wireless sensor network of N nodes transmitting data to a single base station, possibly over multiple hops, what distributed mechanisms should be implemented in order to dynamically allocate fair and efficient transmission rates to each node? Our interference-aware fair rate control (IFRC) detects incipient congestion at a node by monitoring the average queue length, communicates congestion state to exactly the set of potential interferers using a novel low-overhead congestion sharing mechanism, and converges to a fair and efficient rate using an AIMD control law. We evaluate IFRC extensively on a 40-node wireless sensor network testbed. IFRC achieves a fair and efficient rate allocation that is within 20-40% of the optimal fair rate allocation on some network topologies. Its rate adaptation mechanism is highly effective: we did not observe a single instance of queue overflow in our many experiments. Finally, IFRC can be extended easily to support situations where only a subset of the nodes transmit, where the network has multiple base stations, or where nodes are assigned different transmission weights.
"Interference-aware fair rate control in wireless sensor networks," Sumit Rangwala, R. Gummadi, R. Govindan, K. Psounis. In Proceedings of the 2006 conference on Applications, technologies, architectures, and protocols for computer communications (August 11, 2006). DOI: 10.1145/1159913.1159922.
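The AIMD control law driven by average queue length can be sketched for a single node as follows (parameter values, the EWMA weight, and the threshold are illustrative assumptions; IFRC's congestion sharing with the full interferer set is reduced here to one flag):

```python
class AimdRateController:
    """Minimal single-node sketch of IFRC-style rate adaptation:
    additive increase each epoch, multiplicative decrease when the
    EWMA of the queue length crosses a threshold or an interferer
    reports congestion. All parameters are hypothetical."""

    def __init__(self, alpha=1.0, beta=0.5, q_thresh=10.0, ewma_w=0.2):
        self.rate = 1.0      # current transmission rate (arbitrary units)
        self.avg_q = 0.0     # EWMA of the queue length
        self.alpha, self.beta = alpha, beta
        self.q_thresh, self.ewma_w = q_thresh, ewma_w

    def on_epoch(self, queue_len, neighbor_congested=False):
        self.avg_q = (1 - self.ewma_w) * self.avg_q + self.ewma_w * queue_len
        if self.avg_q > self.q_thresh or neighbor_congested:
            self.rate *= self.beta   # multiplicative decrease
        else:
            self.rate += self.alpha  # additive increase
        return self.rate
```

The `neighbor_congested` flag stands in for the congestion state that IFRC shares with the set of potential interferers, so a node backs off on either local or neighboring congestion.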
It has been reported worldwide that peer-to-peer traffic is taking up a significant portion of backbone networks. In particular, it is prominent in Japan because of the high penetration rate of fiber-based broadband access. In this paper, we first report aggregated traffic measurements collected over 21 months from seven ISPs covering 42% of the Japanese backbone traffic. The backbone is dominated by symmetric residential traffic, which increased 37% in 2005. We further investigate residential per-customer traffic in one of the ISPs by comparing DSL and fiber users, heavy-hitters and normal users, and geographic traffic matrices. The results reveal that a small segment of users dictates the overall behavior; 4% of heavy-hitters account for 75% of the inbound volume, and fiber users account for 86% of the inbound volume. About 63% of the total residential volume is user-to-user traffic. The dominant applications exhibit poor locality and communicate with a wide range and number of peers. The distribution of heavy-hitters is heavy-tailed without a clear boundary between heavy-hitters and normal users, which suggests that users start playing with peer-to-peer applications, become heavy-hitters, and eventually shift from DSL to fiber. We provide conclusive empirical evidence from a large and diverse set of commercial backbone data that the emergence of new attractive applications has drastically affected traffic usage and capacity engineering requirements.
"The impact and implications of the growth in residential user-to-user traffic," Kenjiro Cho, K. Fukuda, H. Esaki, A. Kato. In Proceedings of the 2006 conference on Applications, technologies, architectures, and protocols for computer communications (August 11, 2006). DOI: 10.1145/1159913.1159938.