Using Crowdsourcing Marketplaces for Network Measurements: The Case of Spoofer
Qasim Lone, M. Luckie, Maciej Korczyński, H. Asghari, M. Javed, M. V. Eeten
Pub Date: 2018-06-01, DOI: 10.23919/TMA.2018.8506499
2018 Network Traffic Measurement and Analysis Conference (TMA), pp. 1-8

Internet measurement tools are used to make inferences about network policies and practices across the Internet, such as censorship, traffic manipulation, bandwidth, and security measures. Some tools must be run from vantage points within individual networks, and so depend on volunteer recruitment; a small pool of volunteers limits the impact of these tools. Crowdsourcing marketplaces can potentially recruit workers to run tools from networks not covered by the volunteer pool. We design an infrastructure to collect and synchronize measurements from five crowdsourcing platforms, and use that infrastructure to collect data on network source address validation policies for CAIDA's Spoofer project. In six weeks we increased the coverage of Spoofer measurements by recruiting 1,519 workers from 91 countries and 784 unique ASes for 2,000 Euro; 342 of these ASes were not previously covered, representing a 15% increase in ASes over the prior 12 months. We describe lessons learned in recruiting and remunerating workers; in particular, strategies to address worker behavior when workers are screened out because of overlap with the volunteer pool.
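The coverage figures above pin down the baseline by simple arithmetic; a minimal sketch, assuming the reported 15% increase is relative to the unique ASes covered in the prior 12 months:

```python
# Back-of-the-envelope check of the coverage figures reported above.
# Assumption: the "15% increase" is relative to the number of unique
# ASes covered by Spoofer measurements in the prior 12 months.
new_ases = 342        # ASes not previously covered
increase = 0.15       # reported relative increase
workers = 1519        # recruited workers
cost_eur = 2000       # total spend in Euro

prior_ases = new_ases / increase          # implied prior-12-month baseline
print(round(prior_ases))                  # 2280 ASes

print(round(cost_eur / workers, 2))       # 1.32 Euro per recruited worker
```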
Mobile Encrypted Traffic Classification Using Deep Learning
Giuseppe Aceto, D. Ciuonzo, Antonio Montieri, A. Pescapé
Pub Date: 2018-06-01, DOI: 10.23919/TMA.2018.8506558
2018 Network Traffic Measurement and Analysis Conference (TMA), pp. 1-8

The massive adoption of hand-held devices has led to an explosion of mobile traffic volumes traversing home and enterprise networks, as well as the Internet. Procedures for inferring the (mobile) applications generating such traffic, known as Traffic Classification (TC), enable highly valuable profiling information while also raising important privacy issues. The design of accurate classifiers is, however, complicated by the increasing adoption of encrypted protocols (such as TLS), which hinder the applicability of highly accurate approaches such as deep packet inspection. Additionally, the (daily) expanding set of apps and the moving-target nature of mobile traffic quickly make classifier designs based on conventional machine learning, built on manually- and expert-derived features, outdated. For these reasons, we suggest Deep Learning (DL) as a viable strategy to design traffic classifiers based on automatically extracted features that reflect complex mobile-traffic patterns. To this end, different state-of-the-art DL techniques for TC are reproduced, dissected, and set into a systematic framework for comparison, including a performance evaluation workbench. Based on three datasets of real human users' activity, the performance of these DL classifiers is critically investigated, highlighting pitfalls, design guidelines, and open issues of DL in mobile encrypted TC.
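A common input representation in this line of work feeds a classifier the first N payload bytes of a flow, zero-padded and scaled; a minimal sketch of that preprocessing step, with a hypothetical payload prefix standing in for captured traffic (the choice of N = 784, a 28x28 "image" for CNN-style models, is an assumption, not the paper's exact configuration):

```python
# Minimal sketch of a byte-based input representation for DL traffic
# classification: take the first N payload bytes of a flow, zero-pad
# short flows, and scale each byte to [0, 1].
N = 784  # hypothetical choice: 784 bytes reshape to a 28x28 grid

def flow_to_features(payload: bytes, n: int = N) -> list[float]:
    truncated = payload[:n]                              # keep at most n bytes
    padded = truncated + b"\x00" * (n - len(truncated))  # zero-pad short flows
    return [b / 255.0 for b in padded]                   # scale bytes to [0, 1]

# Hypothetical payload: the first bytes of a TLS record header.
features = flow_to_features(b"\x16\x03\x01\x02\x00")
print(len(features))   # 784
```

The point of the automatic-feature-extraction argument above is that the model, not an expert, decides which of these raw bytes matter.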
An SDN-Based Approach for QoS and Reliability in Overlay Networks
Isabel Amigo, G. Sena, Marwa Chami, P. Belzarena
Pub Date: 2018-06-01, DOI: 10.23919/TMA.2018.8506581
2018 Network Traffic Measurement and Analysis Conference (TMA), pp. 1-2

We propose an overlay network architecture for reliable and QoS-aware interconnection between its nodes, without modifying Internet routers and without tunneling overhead. The architecture is based on the SDN paradigm. We demonstrate the feasibility and challenges of such a system using Mininet and the POX controller.
Exploring Usable Path MTU in the Internet
A. Custura, G. Fairhurst, Iain R. Learmonth
Pub Date: 2018-04-24, DOI: 10.23919/TMA.2018.8506538
2018 Network Traffic Measurement and Analysis Conference (TMA), pp. 1-8

To optimise their transmission, Internet endpoints need to know the largest packet size they can send across a specific Internet path, the Path Maximum Transmission Unit (PMTU). This paper explores the PMTU size experienced across the Internet core and across wired and mobile edge networks. Our results show that MSS Clamping has been widely deployed in edge networks, and that some webservers artificially reduce their advertised MSS, both of which we expect to help avoid PMTUD failure for TCP. The maximum packet size used by a TCP connection is also constrained by the acMSS. MSS Clamping was observed in over 20% of edge networks tested. We find that a significant proportion of webservers that advertise a low MSS can still be reached with a 1500-byte packet. We also find that more than half of IPv6 webservers do not attempt PMTUD and clamp the MSS to 1280 bytes. Furthermore, we see evidence of black-hole detection mechanisms implemented by over a quarter of IPv6 webservers and almost 15% of IPv4 webservers. We also consider the implications for UDP, which necessarily cannot utilise MSS Clamping. The paper provides useful input to the design of a robust PMTUD method appropriate for the growing volume of UDP-based applications, by determining that ICMP quotations can be used to verify sender authenticity.
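The MSS values quoted above follow directly from header arithmetic: the MSS is the path MTU minus the IP and TCP headers. A minimal sketch, assuming minimum header sizes (20-byte IPv4 or 40-byte IPv6 header, 20-byte TCP header; IP or TCP options would reduce the MSS further):

```python
# MSS that fits a given path MTU, using minimum header sizes.
IPV4_HDR = 20   # bytes, IPv4 header without options
IPV6_HDR = 40   # bytes, fixed IPv6 header
TCP_HDR = 20    # bytes, TCP header without options

def mss_for_mtu(mtu: int, ipv6: bool = False) -> int:
    ip_hdr = IPV6_HDR if ipv6 else IPV4_HDR
    return mtu - ip_hdr - TCP_HDR

print(mss_for_mtu(1500))             # 1460: classic Ethernet MTU, IPv4
print(mss_for_mtu(1500, ipv6=True))  # 1440: same MTU over IPv6
print(mss_for_mtu(1280, ipv6=True))  # 1220: IPv6 minimum MTU, matching the
                                     # 1280-byte clamping behaviour above
```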