Pub Date: 2019-10-01 | DOI: 10.23919/CNSM46954.2019.9012672
Othmane Belmoukadam, Muhammad Jawad Khokhar, C. Barakat
Screen resolution and network conditions are among the main objective factors impacting the user experience, in particular for video streaming applications. Terminals, for their part, feature increasingly advanced characteristics, resulting in different network requirements for a good visual experience [1]. Previous studies have linked MOS (Mean Opinion Score) to video bit rate for different screen types (e.g., CIF, QCIF, and HD) [2]. We leverage such studies and formulate a QoE-driven resource allocation problem to pinpoint the optimal bandwidth allocation that maximizes the QoE (Quality of Experience) over all users of a provider located behind the same bottleneck link, while accounting for the characteristics of the screens they use for video playout. For our optimization problem, QoE functions are built using curve fitting on data sets capturing the relationship between MOS, screen characteristics, and bandwidth requirements. We propose a simple heuristic based on Lagrangian relaxation and KKT (Karush-Kuhn-Tucker) conditions for a subset of constraints. Numerical simulations show that the proposed heuristic increases overall QoE by up to 20% compared to an allocation with TCP look-alike strategies implementing max-min fairness. Later, we use an MPEG/DASH implementation in the context of ns-3 and show that coupling our approach with a rate adaptation algorithm (e.g., [3]) can help increase QoE while reducing both resolution switches and the number of interruptions.
Title: On Accounting for Screen Resolution in Adaptive Video Streaming: A QoE-Driven Bandwidth Sharing Framework
Published in: 2019 15th International Conference on Network and Service Management (CNSM)
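The KKT-based allocation the abstract describes can be sketched as a water-filling scheme: if each user's QoE is a concave function of bandwidth, the KKT conditions equalize marginal QoE across users at a common "price", which a bisection search can find. The QoE model and the per-screen parameters below are illustrative assumptions, not the paper's fitted curves.

```python
import math

def allocate(screens, capacity, tol=1e-6):
    """Split `capacity` Mbps across users so that total MOS is maximized.

    screens: list of (M, a) pairs -- M is the MOS ceiling for the screen
    type, a its bandwidth sensitivity (hypothetical curve-fit parameters).
    Assumed QoE model: MOS_i(b) = M * (1 - exp(-a*b)), concave in b.
    KKT condition: M*a*exp(-a*b_i) = lam for every user with b_i > 0,
    hence b_i(lam) = max(0, ln(M*a/lam)/a).  Bisect on the price lam
    until the allocations sum to the link capacity.
    """
    def demand(lam):
        return sum(max(0.0, math.log(M * a / lam) / a) for M, a in screens)

    lo, hi = tol, max(M * a for M, a in screens)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if demand(mid) > capacity:
            lo = mid  # price too low: users collectively ask for too much
        else:
            hi = mid
    lam = (lo + hi) / 2
    return [max(0.0, math.log(M * a / lam) / a) for M, a in screens]

# An HD screen (high MOS ceiling, slow saturation) next to a small
# CIF-class screen that saturates quickly.
alloc = allocate([(4.5, 0.5), (3.5, 2.0)], capacity=10.0)
```

Under this model the HD screen receives the larger share, since its marginal QoE stays high for longer.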
Pub Date: 2019-10-01 | DOI: 10.23919/CNSM46954.2019.9012683
K. Batbayar, Emmanouil Dimogerontakis, Roc Meseguer, L. Navarro, R. Sadre
Modern large-scale networked services, such as video streaming, are typically deployed at multiple locations in the network to provide redundancy and load balancing. Different techniques are used to provide performance monitoring information so that client nodes can select the best service instance. One of them is collaborative sensing, where clients share measurement results on the observed service performance to build a common ground of knowledge with low overhead. Clients can then use this common ground to select the most suitable service provider. However, collaborative algorithms are susceptible to false measurements sent by malfunctioning or malicious nodes, which decreases the accuracy of the performance sensing process. We propose Sense-Share, a simple, lightweight, and resilient collaborative sensing framework based on the similarity of the client nodes' perception of service performance. Our experimental evaluation in different topologies shows that service performance sensing using Sense-Share achieves, on average, 94% similarity to non-collaborative brute-force performance sensing, while tolerating faulty nodes. Furthermore, our approach effectively distributes the service monitoring requests over the service nodes and exploits direct inter-node communication to share measurements, resulting in reduced monitoring overhead.
Title: Sense-Share: A Framework for Resilient Collaborative Service Performance Monitoring
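The core idea — accepting only peer reports that are similar to the group's perception — can be illustrated with a minimal sketch. The median-deviation test below is a crude stand-in for the paper's similarity measure; the node names and tolerance are assumptions.

```python
from statistics import median

def fuse_reports(reports, tolerance=0.5):
    """Fuse peer-shared latency reports, discarding implausible ones.

    reports: {node_id: measured_latency_ms}.  A report is kept only if it
    lies within `tolerance` (relative deviation) of the median of all
    reports -- a simplified similarity test in the spirit of Sense-Share.
    Returns (fused_estimate, accepted_node_ids).
    """
    m = median(reports.values())
    accepted = {n: v for n, v in reports.items()
                if abs(v - m) <= tolerance * m}
    fused = sum(accepted.values()) / len(accepted)
    return fused, set(accepted)

# Three honest clients agree on ~20 ms; a faulty node reports 500 ms.
reports = {"a": 20.0, "b": 22.0, "c": 19.0, "mal": 500.0}
est, ok = fuse_reports(reports)
```

Because the median is robust to a minority of outliers, the faulty node's report is excluded before averaging.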
Pub Date: 2019-10-01 | DOI: 10.23919/CNSM46954.2019.9012678
J. Haxhibeqiri, I. Moerman, J. Hoebeke
Wireless networks are becoming more complex while the applications on top are becoming more demanding. To maintain network performance in terms of latency, throughput, and reliability, continuous verification of the performance, possibly followed by on-the-fly network (re)configuration, is needed. To achieve this, the way wireless network monitoring is done needs to be reconsidered and should evolve towards more timely, low-overhead, and fine-grained monitoring. This paper shows how in-band network telemetry (INT) monitoring can achieve these objectives. An INT-enabled node architecture is designed, as well as novel INT options. By means of an implementation on WiFi Linux devices, the concept is validated by tracking the behavior of a real network.
Title: Low Overhead, Fine-grained End-to-end Monitoring of Wireless Networks using In-band Telemetry
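The INT principle — each node appends its own telemetry record to the packet as it forwards it, so the sink can reconstruct per-hop behavior without separate probe traffic — can be sketched as follows. The record fields and the dict-based packet are illustrative simplifications, not the paper's actual INT header options.

```python
import time

def int_append(packet, node_id, queue_len):
    """Append one in-band telemetry record as the packet transits a node.

    packet: dict with a "payload" and an "int_stack" list -- a simplified
    stand-in for an INT metadata stack carried in the packet header.
    """
    packet.setdefault("int_stack", []).append({
        "node": node_id,
        "ts_ns": time.monotonic_ns(),   # per-hop timestamp
        "queue": queue_len,             # instantaneous queue occupancy
    })
    return packet

# A packet crossing three hypothetical WiFi nodes accumulates three
# records; the sink can then recover the path and per-hop delays.
pkt = {"payload": b"data"}
for hop, q in [("ap1", 3), ("relay", 0), ("gw", 7)]:
    pkt = int_append(pkt, hop, q)
```

The monitoring overhead is just the appended records riding on existing traffic, which is what makes the approach low-overhead and fine-grained.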
Pub Date: 2019-10-01 | DOI: 10.23919/CNSM46954.2019.9012673
Tom Goethals, D. Kerkhove, B. Volckaert, F. Turck
For years, containers have been a popular choice for lightweight virtualization in the cloud. With the rise of more powerful and flexible edge devices, container deployment strategies have arisen that leverage the computational power of edge devices for optimal workload distribution. This move from a secure data center network to heterogeneous public and private networks presents some issues in terms of security and network topology that can be partially solved by using a Virtual Private Network (VPN) to connect edge nodes to the cloud. In this paper, the scalability of VPN software is evaluated to determine if and how it can be used in large-scale clusters containing edge nodes. Benchmarks are performed to determine the maximum number of VPN-connected nodes and the influence of network degradation on VPN performance, primarily using traffic typical for edge devices generating IoT data. Some high-level conclusions are drawn from the results, indicating that WireGuard is an excellent choice of VPN software to connect edge nodes in a cluster. Analysis of the results also shows the strengths and weaknesses of other VPN software.
Title: Scalability evaluation of VPN technologies for secure container networking
Pub Date: 2019-10-01 | DOI: 10.23919/CNSM46954.2019.9012667
Joseph McNamara, Liam Fallon, Enda Fallon
Services such as interactive video and real-time gaming are ubiquitous on modern networks. The approaching realisation of 5G, as well as the virtualisation and scalability of network functions made possible by technologies such as NFV and Kubernetes, pushes the frontiers of what applications can do and how they can be deployed. However, managing such intangible services is a real challenge for network management systems. Adaptive Policy is an approach that can be applied to govern such services in an intent-based manner. In this work, we explore whether the manner in which such services are deployed, virtualized, and scaled can be guided using real-time, context-aware decision making. We investigate how to apply Adaptive Policy to the problem of optimizing interactive video streaming delivery in a virtualized environment. We utilise components of our previously established test-bed framework and implement a single-layer neural network through Adaptive Policy, in which the weights assigned to network metrics are continuously adjusted through supervised test cycles, resulting in weights proportional to their associated impact on video stream quality. We present initial test results from our Perceptron-inspired, policy-based approach to video quality optimisation through weighted network resource evaluation.
Title: A Hybrid Machine Learning/Policy Approach to Optimise Video Path Selection
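The single-layer weight-adjustment loop described above can be sketched with the classic perceptron update rule: on each supervised cycle, if the weighted metric score misclassifies the observed stream quality, the weights are nudged toward the label. The metric names and sample values are illustrative assumptions, not the paper's feature set.

```python
def perceptron_update(weights, metrics, label, lr=0.1):
    """One supervised update of the metric weights (single-layer model).

    weights/metrics: parallel lists, e.g. (normalized loss, jitter,
    throughput) -- hypothetical features, not the paper's exact set.
    label: +1 if stream quality was acceptable on this cycle, else -1.
    """
    score = sum(w * m for w, m in zip(weights, metrics))
    pred = 1 if score >= 0 else -1
    if pred != label:  # misclassified -> nudge weights toward the label
        weights = [w + lr * label * m for w, m in zip(weights, metrics)]
    return weights

w = [0.0, 0.0, 0.0]
# High loss/jitter cycles labelled bad (-1); high-throughput ones good (+1).
for x, y in [([0.9, 0.8, 0.1], -1), ([0.1, 0.2, 0.9], 1)] * 20:
    w = perceptron_update(w, x, y)
```

After training, the weights carry the sign of each metric's impact: negative for loss and jitter, positive for throughput, which is the "weights in proportion to their impact" behaviour the abstract describes.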
Pub Date: 2019-10-01 | DOI: 10.23919/cnsm46954.2019.9012712
Pan Zhao, Xiaoyang Li, Lei Feng, Qinghui Zhang, Weidong Yang, Fei Zheng
To meet immensely diverse service requirements, the heterogeneous cloud radio access network (H-CRAN) architecture and D2D communication are embraced. Consequently, resource allocation between D2D pairs and current users is a challenge. In this paper, a joint power control and sub-channel allocation scheme is proposed. The original mixed-integer nonlinear programming problem is decomposed into power allocation and sub-channel allocation, solved with a Geometric Vertex Search approach and a 3-dimensional (3-D) matching method, respectively. Finally, numerical results verify that the proposed scheme achieves about 35% and 60% improvements in total throughput compared with other approaches.
Title: 3-D Matching-based Resource Allocation for D2D Communications in H-CRAN Network
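The 3-D matching step pairs each D2D link with a sub-channel and a power level. A greedy sketch conveys the shape of the problem: pick the highest-rate (pair, channel, power) triple whose pair and channel are still free. The rate model is a toy assumption; the paper's actual method is not greedy.

```python
from itertools import product

def greedy_3d_match(pairs, channels, power_levels, rate):
    """Greedy stand-in for a 3-D matching over (pair, channel, power).

    Each D2D pair gets at most one (channel, power) combination and each
    channel serves at most one pair; triples are taken highest-rate
    first.  `rate(p, c, pw)` is an assumed throughput model.
    """
    triples = sorted(product(pairs, channels, power_levels),
                     key=lambda t: rate(*t), reverse=True)
    used_pairs, used_channels, matching = set(), set(), []
    for p, c, pw in triples:
        if p not in used_pairs and c not in used_channels:
            used_pairs.add(p)
            used_channels.add(c)
            matching.append((p, c, pw))
    return matching

# Toy model: rate grows with power, shrinks if pair/channel indices differ.
toy_rate = lambda p, c, pw: pw / (1 + abs(p - c))
m = greedy_3d_match(pairs=[0, 1], channels=[0, 1],
                    power_levels=[1, 2], rate=toy_rate)
```

Under this toy model each pair lands on its best-matched channel at full power, illustrating the assignment structure the matching formulation optimizes exactly.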
Pub Date: 2019-10-01 | DOI: 10.23919/CNSM46954.2019.9012722
Mihir Joshi, Parmeet Singh, A. N. Zincir-Heywood
The aim of this work is to detect compromised Twitter users based on the writing style of their tweets. In this paper, we use Siamese networks to learn a representation of user tweets that allows us to classify them with a limited amount of ground-truth data. We propose employing this classification model to identify compromised Twitter accounts.
Title: Compromised Tweet Detection Using Siamese Networks and fastText Representations
Pub Date: 2019-10-01 | DOI: 10.23919/CNSM46954.2019.9012738
Lionel Metongnon, R. Sadre, E. C. Ezin
The Internet of Things (IoT) is not one single entity, but a collection of different devices, communication technologies, protocols, and services. IoT systems can span a large number of individually managed networks that are interconnected through the Internet and host the different components of an IoT application, such as sensor devices, storage servers, and data processing services. Protecting such a complex multiparty system from abuse becomes a very challenging task. New difficulties arise every day when policies are updated or new collaborations and federations appear between entities. Moreover, hacked IoT devices can also become the source of powerful attacks, as the Mirai malware has demonstrated, and therefore a danger for the other involved parties. In this paper, we propose an approach to improve the management and protection of collaborating IoT systems using distributed intrusion detection and permission-based access control. Our approach is based on interconnected middleboxes that monitor the communication between the various IoT networks and are able to stop incoming as well as outgoing attacks. We evaluate our approach through experiments with different types of attacks.
Title: Distributed Middlebox Architecture for IoT Protection
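The permission-based access control at the heart of the middleboxes can be sketched as a filter over inter-network flows: only flows the federated operators have explicitly authorized pass, in either direction. The policy format and network names are illustrative assumptions.

```python
def make_middlebox(permissions):
    """Return a filter for inter-network IoT traffic.

    permissions: set of (src_net, dst_net, port) triples agreed between
    the federated operators (a hypothetical policy encoding).  The
    returned predicate mirrors the middlebox role in the paper: it sits
    between networks and drops unauthorized inbound AND outbound flows.
    """
    def allow(src_net, dst_net, port):
        return (src_net, dst_net, port) in permissions
    return allow

policy = {("sensors", "storage", 8883),      # MQTT over TLS
          ("processing", "storage", 5432)}   # database access
allow = make_middlebox(policy)
# A Mirai-style bot in the sensor network scanning the open Internet on
# port 23 (telnet) is blocked -- outgoing attacks stop at the middlebox.
```

Because the check is directional, a compromised storage server also cannot open unexpected connections back into the sensor network.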
Pub Date: 2019-10-01 | DOI: 10.23919/CNSM46954.2019.9012679
O. Oyebode, Rita Orji
Diabetes is a non-communicable disease associated with increased glucose levels due to inadequate supply of insulin (known as Type 1 diabetes) or inability to use insulin efficiently (known as Type 2 diabetes). Though the exact cause of Type 1 diabetes is unknown, the probable causes are genetics and environmental factors (such as exposure to viruses). On the other hand, Type 2 diabetes is largely linked to unhealthy lifestyle choices. In Nigeria, many people are believed to be living with diabetes, and the country's diabetes prevalence rate is one of the highest in Africa. To determine the factors responsible for diabetes prevalence in Nigeria, we analyzed social media content related to diabetes, since billions of people, including diabetic patients and healthcare professionals, use social media platforms to freely share their experiences and discuss many health-related topics. None of the existing research targets the African audience, who are also major users of social media platforms; hence our work aims to close this gap by leveraging an African social media platform targeted at Nigerians to gather diabetes-related data, and then applying machine learning techniques to detect the factors responsible for diabetes prevalence in Nigeria. Based on our results, we discuss positive behavioural or lifestyle changes that are necessary to prevent and treat diabetes in Nigeria, as well as the intervention designs required to bring about those changes. Future work will develop a diabetes intervention application implementing all the design features highlighted in Section V of this paper and making it generally accessible to Nigerians.
Title: Detecting Factors Responsible for Diabetes Prevalence in Nigeria using Social Media and Machine Learning
Pub Date: 2019-10-01 | DOI: 10.23919/CNSM46954.2019.9012669
A. Lazaris, V. Prasanna
The ability to generate network traffic predictions at short time scales is crucial for many network management tasks such as traffic engineering, anomaly detection, and traffic matrix estimation. However, building models that are able to predict the traffic of modern networks at short time scales is not a trivial task due to the diversity of network traffic sources. In this paper, we present a framework for network-wide link-level traffic prediction using Long Short-Term Memory (LSTM) neural networks. Our proposed framework leverages link statistics that can be easily collected, either by the controller of a Software Defined Network (SDN) or by SNMP measurements in a legacy network, in order to predict future link throughputs. We implement several variations of LSTMs and compare their performance with traditional baseline models. Our evaluation study using real network traces from a Tier-1 ISP illustrates that LSTMs can predict link throughputs with very high accuracy, outperforming the baselines for various traffic aggregation levels and time scales.
Title: Deep Learning Models For Aggregated Network Traffic Prediction
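The supervised form that such link-level prediction takes — past throughput samples in, next sample out — can be sketched without any deep-learning library. The sliding-window preparation below is the step common to LSTMs and the baselines they are compared against; the sample series and lookback length are assumptions for illustration.

```python
def make_windows(throughput, lookback=5):
    """Build (input window, next value) pairs for link-level prediction.

    throughput: per-interval link throughput samples (e.g. SNMP counter
    deltas).  Each window of `lookback` past samples is paired with the
    sample that follows it -- the supervised form a sequence model such
    as an LSTM trains on.
    """
    X, y = [], []
    for i in range(len(throughput) - lookback):
        X.append(throughput[i:i + lookback])
        y.append(throughput[i + lookback])
    return X, y

series = [10, 12, 11, 13, 15, 14, 16, 18]  # toy throughput trace (Mbps)
X, y = make_windows(series, lookback=3)
# A last-value ("persistence") baseline the learned model must beat:
baseline_err = sum(abs(w[-1] - t) for w, t in zip(X, y)) / len(y)
```

Reporting accuracy against such a persistence baseline is what makes the "outperforming the baselines" claim measurable across aggregation levels and time scales.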