Auto-configuration of 802.11n WLANs
Mustafa Y. Arslan, K. Pelechrinis, Ioannis Broustis, S. Krishnamurthy, Sateesh Addepalli, K. Papagiannaki
DOI: 10.1145/1921168.1921204
Channel Bonding (CB) combines two adjacent frequency bands into a single, wider band to facilitate high-data-rate transmissions in MIMO-based 802.11n networks. However, the use of a wider band with CB can exacerbate interference effects. Furthermore, CB does not always provide benefits in interference-free settings and can even degrade performance in some cases. We conduct an in-depth experimental study to understand the implications of CB. Based on this study, we design an auto-configuration framework, ACORN, for enterprise 802.11n WLANs. ACORN integrates the functions of user association and channel allocation, since our study reveals that they are tightly coupled when CB is used. We show that the channel allocation problem with the constraints of CB is NP-complete. Thus, ACORN uses an algorithm that provides a worst-case approximation ratio of [EQUATION], with Δ being the maximum node degree in the network. We implement ACORN on our 802.11n testbed. Our experiments show that ACORN (i) outperforms previous approaches that are agnostic to CB constraints, providing per-AP throughput gains from 1.5x to 6x, and (ii) in practice, achieves a channel allocation approximation ratio much better than [EQUATION].
{"title":"Auto-configuration of 802.11n WLANs","authors":"Mustafa Y. Arslan, K. Pelechrinis, Ioannis Broustis, S. Krishnamurthy, Sateesh Addepalli, K. Papagiannaki","doi":"10.1145/1921168.1921204","DOIUrl":"https://doi.org/10.1145/1921168.1921204","url":null,"abstract":"Channel Bonding (CB) combines two adjacent frequency bands to form a new, wider band to facilitate high data rate transmissions in MIMO-based 802.11n networks. However, the use of a wider band with CB can exacerbate interference effects. Furthermore, CB does not always provide benefits in interference-free settings, and can even degrade performance in some cases. We conduct an in-depth, experimental study to understand the implications of CB. Based on this study we design an auto-configuration framework, ACORN, for enterprise 802.11n WLANs. ACORN integrates the functions of user association and channel allocation, since our study reveals that they are tightly coupled when CB is used. We show that the channel allocation problem with the constraints of CB is NP-complete. Thus, ACORN uses an algorithm that provides a worst case approximation ratio of [EQUATION] with Δ being the maximum node degree in the network. We implement ACORN on our 802.11n testbed. Our experiments show that ACORN (i) outperforms previous approaches that are agnostic to CB constraints; it provides per-AP throughput gains from 1.5x to 6x and (ii) in practice, its channel allocation module achieves an approximation ratio much better than [EQUATION].","PeriodicalId":20688,"journal":{"name":"Proceedings of The 6th International Conference on Innovation in Science and Technology","volume":"72 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2010-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88479277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Balancing throughput, robustness, and in-order delivery in P2P VoD
Bin Fan, D. Andersen, M. Kaminsky, K. Papagiannaki
DOI: 10.1145/1921168.1921182
Peer-to-peer has emerged in recent years as a promising approach to providing Video-on-Demand (VoD) streaming. The design space, however, is vast and still not well understood---yet choosing the right approach is critical to system performance. This paper takes a fresh look at the P2P VoD design space using a simple analytical model that focuses on the allocation of uplink bandwidth resources for different chunks across peers. We describe a fundamental tradeoff among system throughput, sequentiality of downloaded content, and robustness to heterogeneous network conditions and node capacities, and we prove that no system can achieve all three simultaneously. Empirical results from Emulab confirm the analysis and show how one might implement efficient peer-to-peer VoD streaming with an appropriate balance of the tradeoff.
{"title":"Balancing throughput, robustness, and in-order delivery in P2P VoD","authors":"Bin Fan, D. Andersen, M. Kaminsky, K. Papagiannaki","doi":"10.1145/1921168.1921182","DOIUrl":"https://doi.org/10.1145/1921168.1921182","url":null,"abstract":"Peer-to-peer has emerged in recent years as a promising approach to providing Video-on-Demand streaming. The design space, however, is vast and still not well understood---yet choosing the right approach is critical to system performance. This paper takes a fresh look at the p2p VoD design space using a simple analytical model that focuses on the allocation of uplink bandwidth resource for different chunks across peers. We describe a fundamental tradeoff that exists between system throughput, sequentiality of downloaded content and robustness to heterogeneous network conditions and node capacities, and we prove that no system can achieve all three simultaneously. Empirical results from Emulab confirm the analysis and show how one might implement efficient peer-to-peer VoD streaming with an appropriate balance of the tradeoff.","PeriodicalId":20688,"journal":{"name":"Proceedings of The 6th International Conference on Innovation in Science and Technology","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2010-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90460162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Proceedings of the 6th International COnference
J. D. Oliveira, M. Ott, T. Griffin, M. Médard
DOI: 10.1145/1921168
It is our great pleasure to welcome you to ACM CoNEXT 2010, the 6th International COnference on emerging Networking EXperiments and Technologies. The conference is hosted by Drexel University and is held in the City of Brotherly Love, Philadelphia. We hope that the vibe of the city, in combination with an outstanding technical and social program, will lead to stimulating discussions and an exchange of ideas among the attending members of our world-spanning community.

This year's edition continues the tradition of CoNEXT in fostering the scientific and technological intersection between different research communities in networking, from both academia and industry. The main conference is preceded by two interesting workshops that reflect the communities' renewed interest in fundamentally rethinking network architecture (ReArch) and in what networking elements should and can do (PRESTO). In addition, we also host a student workshop to give the next generation of networking researchers a platform to discuss their work in an open and informal setting. The main conference is again organized as a single track of thematically grouped sessions to foster interaction and discussion among all participants and across different points of view.

CoNEXT strives to be an affordable conference, and we managed to keep registration fees at the same level for the third consecutive year, despite a tough climate for attracting supporters. We are especially grateful to CISCO for its outstanding commitment to research and education in becoming a Gold Supporter of ACM CoNEXT 2010; many thanks to our Bronze Supporter AT&T, and also to our Patrons: INTEL, NICTA, and Drexel University. We thank the National Science Foundation and SIGCOMM for providing generous support for student travel grants.
{"title":"Proceedings of the 6th International COnference","authors":"J. D. Oliveira, M. Ott, T. Griffin, M. Médard","doi":"10.1145/1921168","DOIUrl":"https://doi.org/10.1145/1921168","url":null,"abstract":"It is our great pleasure to welcome you to ACM CoNEXT 2010, the 6th International COnference on emerging Networking EXperiments and Technologies. The conference is hosted by Drexel University and is held in the City of Brotherly Love, Philadelphia. We hope that the vibe of the city in combination with an outstanding technical and social program will lead to stimulating discussions and exchange of ideas among the attending members of our world-spanning community. \u0000 \u0000This year's edition continues the tradition of CoNEXT to foster the scientific and technological intersection between different research communities in networking from both academia and industry. The main conference is preceded by two interesting workshops that reflect the communities' renewed interest in fundamentally rethinking network architecture (ReArch) and what networking elements should and can do (PRESTO). In addition to those, we also have a student workshop to give the next generation of networking researchers a platform to discuss their work in an open and informal setting. The main conference is again organized along a single track of thematically grouped sessions to foster the interaction and discussions among all participants and across different points of views. \u0000 \u0000CoNEXT strives to be an affordable conference and we managed to keep registration fees at the same level for the third consecutive year, despite a tough climate for attracting supporters. We are especially grateful to CISCO, for its outstanding commitment to research and education by becoming a Gold Supporter of ACM CoNEXT 2010; many thanks to our Bronze Supporter AT&T, and also to our Patrons: INTEL, NICTA and Drexel University. We thank the National Science Foundation and SIGCOMM for providing generous support for student travel grants.","PeriodicalId":20688,"journal":{"name":"Proceedings of The 6th International Conference on Innovation in Science and Technology","volume":"39 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2010-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91181492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Network-wide deployment of intrusion detection and prevention systems
V. Sekar, Ravishankar Krishnaswamy, Anupam Gupta, M. Reiter
DOI: 10.1145/1921168.1921192
Traditional efforts for scaling network intrusion detection systems (NIDS) and intrusion prevention systems (NIPS) have largely focused on a single-vantage-point view. In this paper, we explore an alternative design that exploits spatial, network-wide opportunities for distributing NIDS and NIPS functions. For the NIDS case, we design a linear programming formulation that assigns detection responsibilities to nodes while ensuring that no node is overloaded. We describe a prototype NIDS implementation, adapted from the Bro system, that analyzes traffic per these assignments, and we demonstrate the advantages this approach achieves. For NIPS, we show how to maximally leverage specialized hardware (e.g., TCAMs) to reduce the footprint of unwanted traffic on the network. Such hardware constraints make the optimization problem NP-hard, and we provide practical approximation algorithms based on randomized rounding.
{"title":"Network-wide deployment of intrusion detection and prevention systems","authors":"V. Sekar, Ravishankar Krishnaswamy, Anupam Gupta, M. Reiter","doi":"10.1145/1921168.1921192","DOIUrl":"https://doi.org/10.1145/1921168.1921192","url":null,"abstract":"Traditional efforts for scaling network intrusion detection (NIDS) and intrusion prevention systems (NIPS) have largely focused on a single-vantage-point view. In this paper, we explore an alternative design that exploits spatial, network-wide opportunities for distributing NIDS and NIPS functions. For the NIDS case, we design a linear programming formulation to assign detection responsibilities to nodes while ensuring that no node is overloaded. We describe a prototype NIDS implementation adapted from the Bro system to analyze traffic per these assignments, and demonstrate the advantages that this approach achieves. For NIPS, we show how to maximally leverage specialized hardware (e.g., TCAMs) to reduce the footprint of unwanted traffic on the network. Such hardware constraints make the optimization problem NP-hard, and we provide practical approximation algorithms based on randomized rounding.","PeriodicalId":20688,"journal":{"name":"Proceedings of The 6th International Conference on Innovation in Science and Technology","volume":"60 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2010-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78142697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MAWILab: combining diverse anomaly detectors for automated anomaly labeling and performance benchmarking
Romain Fontugne, P. Borgnat, P. Abry, K. Fukuda
DOI: 10.1145/1921168.1921179
Evaluating anomaly detectors is a crucial task in traffic monitoring, made particularly difficult by the lack of ground truth. The goal of this article is to assist researchers in evaluating detectors by providing them with labeled anomaly traffic traces. We aim to automatically find anomalies in the MAWI archive using a new methodology that combines different and independent detectors. A key challenge is to compare the alarms raised by these detectors, even though they operate at different traffic granularities. Our main contribution is a reliable graph-based methodology that combines the outputs of any set of anomaly detectors. We evaluated four unsupervised combination strategies; the best is the one based on dimensionality reduction. The synergy between anomaly detectors makes it possible to detect twice as many anomalies as the most accurate single detector and to reject numerous false-positive alarms reported by the detectors. Significant anomalous traffic features are extracted from the reported alarms, so the labels assigned to the MAWI archive are concise. The results on the MAWI traffic are publicly available and updated daily. Moreover, the approach can incorporate the results of future anomaly detectors, improving the quality and variety of the labels over time.
{"title":"MAWILab: combining diverse anomaly detectors for automated anomaly labeling and performance benchmarking","authors":"Romain Fontugne, P. Borgnat, P. Abry, K. Fukuda","doi":"10.1145/1921168.1921179","DOIUrl":"https://doi.org/10.1145/1921168.1921179","url":null,"abstract":"Evaluating anomaly detectors is a crucial task in traffic monitoring made particularly difficult due to the lack of ground truth. The goal of the present article is to assist researchers in the evaluation of detectors by providing them with labeled anomaly traffic traces. We aim at automatically finding anomalies in the MAWI archive using a new methodology that combines different and independent detectors. A key challenge is to compare the alarms raised by these detectors, though they operate at different traffic granularities. The main contribution is to propose a reliable graph-based methodology that combines any anomaly detector outputs. We evaluated four unsupervised combination strategies; the best is the one that is based on dimensionality reduction. The synergy between anomaly detectors permits to detect twice as many anomalies as the most accurate detector, and to reject numerous false positive alarms reported by the detectors. Significant anomalous traffic features are extracted from reported alarms, hence the labels assigned to the MAWI archive are concise. The results on the MAWI traffic are publicly available and updated daily. Also, this approach permits to include the results of upcoming anomaly detectors so as to improve over time the quality and variety of labels.","PeriodicalId":20688,"journal":{"name":"Proceedings of The 6th International Conference on Innovation in Science and Technology","volume":"10 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2010-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82006632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing the vulnerability of replicated network services
G. Bissias, B. Levine, R. Sitaraman
DOI: 10.1145/1921168.1921200
Client-server networks are pervasive and fundamental, and include such key networks as the Internet, power grids, and road networks. In a client-server network, clients obtain a service by connecting to one of a redundant set of servers. These networks are vulnerable to node and link failures, which can leave some clients disconnected from the servers. We develop algorithms that quantify and bound the inherent vulnerability of a client-server network using semidefinite programming (SDP) and branch-and-cut techniques. Further, we develop a divide-and-conquer algorithm that solves the problem for large graphs. We use these techniques to show that: for the Philippine Power Grid, removing just over 6% of the transmission lines disconnects at least 20% but no more than 50% of the substations from all generators; on a large wireless mesh network, disrupting 5% of the wireless links between relays removes Internet access for half the relays; even after any 16% of Tier 2 ASes are removed, more than 50% of the remaining Tier 2 ASes remain connected to the Tier 1 backbone; and with 300 roadblocks erected in Michigan, it is possible to disconnect 28--43% of the population from all airports.
{"title":"Assessing the vulnerability of replicated network services","authors":"G. Bissias, B. Levine, R. Sitaraman","doi":"10.1145/1921168.1921200","DOIUrl":"https://doi.org/10.1145/1921168.1921200","url":null,"abstract":"Client-server networks are pervasive, fundamental, and include such key networks as the Internet, power grids, and road networks. In a client-server network, clients obtain a service by connecting to one of a redundant set of servers. These networks are vulnerable to node and link failures, causing some clients to become disconnected from the servers. We develop algorithms that quantify and bound the inherent vulnerability of a clientserver network using semidefinite programming (SDP) and branch-and-cut techniques. Further, we develop a divide-and-conquer algorithm that solves the problem for large graphs. We use these techniques to show that: for the Philippine Power Grid removing just over 6% of the transmission lines will disconnect at least 20% but not more than 50% of the substations from all generators; on a large wireless mesh network disrupting 5% of wireless links between relays removes Internet access for half the relays; even after any 16% of Tier 2 ASes are removed, more than 50% of the remaining Tier 2 ASes will be connected to the Tier 1 backbone; when 300 roadblocks are erected in Michigan, it's possible to disconnect 28--43% of the population from all airports.","PeriodicalId":20688,"journal":{"name":"Proceedings of The 6th International Conference on Innovation in Science and Technology","volume":"70 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2010-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78642008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LEGUP: using heterogeneity to reduce the cost of data center network upgrades
Andrew R. Curtis, S. Keshav, A. López-Ortiz
DOI: 10.1145/1921168.1921187
Fundamental limitations of traditional data center network architectures have led to the development of architectures that provide enormous bisection bandwidth for up to hundreds of thousands of servers. Because these architectures rely on homogeneous switches, implementing one in a legacy data center usually requires replacing most existing switches. Such forklift upgrades are typically prohibitively expensive; instead, a data center manager should be able to selectively add switches to boost bisection bandwidth. Doing so, however, adds heterogeneity to the network's switches, and heterogeneous high-performance interconnection topologies are not well understood. We therefore develop a theory of heterogeneous Clos networks. We show that our construction needs only as much link capacity as the classic Clos network to route the same traffic matrices, and that this bound is optimal. Placing additional equipment in a highly constrained data center is challenging in practice, however. We propose LEGUP to design the topology and physical arrangement of such network upgrades or expansions. Compared to current solutions, we show that LEGUP finds network upgrades with more bisection bandwidth for half the cost. And when a data center is expanded iteratively, LEGUP's network has 265% more bisection bandwidth than an iteratively upgraded fat-tree.
{"title":"LEGUP: using heterogeneity to reduce the cost of data center network upgrades","authors":"Andrew R. Curtis, S. Keshav, A. López-Ortiz","doi":"10.1145/1921168.1921187","DOIUrl":"https://doi.org/10.1145/1921168.1921187","url":null,"abstract":"Fundamental limitations of traditional data center network architectures have led to the development of architectures that provide enormous bisection bandwidth for up to hundreds of thousands of servers. Because these architectures rely on homogeneous switches, implementing one in a legacy data center usually requires replacing most existing switches. Such forklift upgrades are typically prohibitively expensive; instead, a data center manager should be able to selectively add switches to boost bisection bandwidth. Doing so adds heterogeneity to the network's switches and heterogeneous high-performance interconnection topologies are not well understood. Therefore, we develop the theory of heterogeneous Clos networks. We show that our construction needs only as much link capacity as the classic Clos network to route the same traffic matrices and this bound is the optimal. Placing additional equipment in a highly constrained data center is challenging in practice, however. We propose LEGUP to design the topology and physical arrangement of such network upgrades or expansions. Compared to current solutions, we show that LEGUP finds network upgrades with more bisection bandwidth for half the cost. And when expanding a data center iteratively, LEGUP's network has 265% more bisection bandwidth than an iteratively upgraded fat-tree.","PeriodicalId":20688,"journal":{"name":"Proceedings of The 6th International Conference on Innovation in Science and Technology","volume":"36 3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2010-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77477174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Is content publishing in BitTorrent altruistic or profit-driven?
R. C. Rumín, M. Kryczka, Ángel Cuevas, S. Kaune, Carmen Guerrero, R. Rejaie
DOI: 10.1145/1921168.1921183
BitTorrent is the most popular P2P content delivery application, in which individual users share various types of content with tens of thousands of other users. The growing popularity of BitTorrent is primarily due to the availability of valuable content at no cost to consumers. However, apart from the required resources, publishing (sharing) valuable (and often copyrighted) content has serious legal implications for the users who publish the material (the publishers). This raises the question of whether (at least the major) content publishers behave altruistically or have other incentives, such as financial gain. In this study, we identify the content publishers of more than 55K torrents in two major BitTorrent portals and examine their behavior. We demonstrate that a small fraction of publishers is responsible for 67% of the published content and 75% of the downloads. Our investigation reveals that these major publishers fit two different profiles. On the one hand, antipiracy agencies and malicious publishers publish large amounts of fake files to protect copyrighted content and to spread malware, respectively. On the other hand, content publishing in BitTorrent is largely driven by companies with financial incentives. Therefore, if these companies lose interest or become unable to publish content, BitTorrent traffic and portals may disappear, or at least their associated traffic will be significantly reduced.
{"title":"Is content publishing in BitTorrent altruistic or profit-driven?","authors":"R. C. Rumín, M. Kryczka, Ángel Cuevas, S. Kaune, Carmen Guerrero, R. Rejaie","doi":"10.1145/1921168.1921183","DOIUrl":"https://doi.org/10.1145/1921168.1921183","url":null,"abstract":"BitTorrent is the most popular P2P content delivery application where individual users share various type of content with tens of thousands of other users. The growing popularity of BitTorrent is primarily due to the availability of valuable content without any cost for the consumers. However, apart from required resources, publishing (sharing) valuable (and often copyrighted) content has serious legal implications for users who publish the material (or publishers). This raises a question that whether (at least major) content publishers behave in an altruistic fashion or have other incentives such as financial. In this study, we identify the content publishers of more than 55K torrents in two major BitTorrent portals and examine their behavior. We demonstrate that a small fraction of publishers is responsible for 67% of the published content and 75% of the downloads. Our investigations reveal that these major publishers respond to two different profiles. On the one hand, antipiracy agencies and malicious publishers publish a large amount of fake files to protect copyrighted content and spread malware respectively. On the other hand, content publishing in BitTorrent is largely driven by companies with financial incentives. Therefore, if these companies lose their interest or are unable to publish content, BitTorrent traffic/portals may disappear or at least their associated traffic will be significantly reduced.","PeriodicalId":20688,"journal":{"name":"Proceedings of The 6th International Conference on Innovation in Science and Technology","volume":"98 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2010-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80719560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Verifiable network-performance measurements
K. Argyraki, Petros Maniatis, Ankit Singla
DOI: 10.1145/1921168.1921170
In the current Internet, there is no clean way for affected parties to react to poor forwarding performance: to detect and assess Service Level Agreement (SLA) violations by a contractual partner, a domain must resort to ad-hoc monitoring using probes. Instead, we propose Network Confessional, a new, systematic approach to the problem of forwarding-performance verification. Our system relies on voluntary reporting, allowing each network domain to disclose its loss and delay performance to its customers and peers and, potentially, a regulator. Most importantly, it enables verifiable performance measurements, i.e., domains cannot abuse it to significantly exaggerate their performance. Finally, our system is tunable, allowing each participating domain to determine how many resources to devote to it independently (i.e., without any inter-domain coordination), exposing a controllable trade-off between performance-verification quality and resource consumption. Our system comes at the cost of deploying modest functionality at the participating domains' border routers; we show that it requires reasonable resources, well within modern network capabilities.
{"title":"Verifiable network-performance measurements","authors":"K. Argyraki, Petros Maniatis, Ankit Singla","doi":"10.1145/1921168.1921170","DOIUrl":"https://doi.org/10.1145/1921168.1921170","url":null,"abstract":"In the current Internet, there is no clean way for affected parties to react to poor forwarding performance: to detect and assess Service Level Agreement (SLA) violations by a contractual partner, a domain must resort to ad-hoc monitoring using probes. Instead, we propose Network Confessional, a new, systematic approach to the problem of forwarding-performance verification. Our system relies on voluntary reporting, allowing each network domain to disclose its loss and delay performance to its customers and peers and, potentially, a regulator. Most importantly, it enables verifiable performance measurements, i.e., domains cannot abuse it to significantly exaggerate their performance. Finally, our system is tunable, allowing each participating domain to determine how many resources to devote to it independently (i.e., without any inter-domain coordination), exposing a controllable trade-off between performance-verification quality and resource consumption. Our system comes at the cost of deploying modest functionality at the participating domains' border routers; we show that it requires reasonable resources, well within modern network capabilities.","PeriodicalId":20688,"journal":{"name":"Proceedings of The 6th International Conference on Innovation in Science and Technology","volume":"42 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2010-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81781815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}