Preventive Start-time Optimization to Determine Link Weights against Multiple Link Failures
Pub Date: 2019-11-01 | DOI: 10.1109/CloudNet47604.2019.9064130
Yukikazu Hirano, Fujun He, Takehiro Sato, E. Oki
This paper proposes a network design model to minimize the worst-case network congestion against multiple link failures, where open shortest path first (OSPF) link weights are determined at the beginning of network operation. In the proposed model, called a preventive start-time optimization model with multiple-link failure (PSO-M), the number of multiple-link failure patterns to support is restricted by introducing a probabilistic constraint called a probabilistic guarantee. Under the condition that the total probability of non-connected failure patterns does not exceed a specified probability, PSO-M supports only connected failure patterns to determine the link weights. Numerical results show the effectiveness of the proposed model.
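For readers unfamiliar with the probabilistic-guarantee idea, the following minimal Python sketch (not the authors' formulation; the topology, per-link failure probability, and threshold are hypothetical) enumerates link-failure patterns, treats the connected ones as the patterns to support, and checks that the total probability of the discarded non-connected patterns stays below the specified bound.

```python
import itertools
import networkx as nx

# Hypothetical 4-node ring topology; independent per-link failure probability is assumed.
links = [(0, 1), (1, 2), (2, 3), (3, 0)]
p_fail = 0.01          # assumed failure probability per link
epsilon = 1e-3         # assumed probabilistic-guarantee threshold

G = nx.Graph(links)

def pattern_probability(failed):
    """Probability that exactly this set of links fails (independent failures)."""
    prob = 1.0
    for link in links:
        prob *= p_fail if link in failed else (1.0 - p_fail)
    return prob

supported, discarded_prob = [], 0.0
for k in range(len(links) + 1):
    for failed in itertools.combinations(links, k):
        H = G.copy()
        H.remove_edges_from(failed)
        if nx.is_connected(H):
            supported.append(failed)                  # connected pattern: supported
        else:
            discarded_prob += pattern_probability(set(failed))

# Probabilistic guarantee: unsupported (non-connected) patterns must be rare enough.
print(f"supported patterns: {len(supported)}, discarded probability: {discarded_prob:.6f}")
print("guarantee satisfied" if discarded_prob <= epsilon else "guarantee violated")
```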
{"title":"Preventive Start-time Optimization to Determine Link Weights against Multiple Link Failures","authors":"Yukikazu Hirano, Fujun He, Takehiro Sato, E. Oki","doi":"10.1109/CloudNet47604.2019.9064130","DOIUrl":"https://doi.org/10.1109/CloudNet47604.2019.9064130","url":null,"abstract":"This paper proposes a network design model to minimize the worst-case network congestion against multiple link failures, where link weights of open shortest path first (OSPF) link weights are determined at the beginning of network operation. In the proposed model, which is called a preventive start-time optimization model with multiple-link failure (PSO-M), the number of multiple link failure patterns to support is restricted by introducing a probabilistic constraint called probabilistic guarantee. Under the condition that the total probability of non-connected failure patterns does not exceed a specified probability, PSO-M supports only connected failure patterns to determine the link weights. Numerical results show the effectiveness of proposed model.","PeriodicalId":340890,"journal":{"name":"2019 IEEE 8th International Conference on Cloud Networking (CloudNet)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114181347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards making big data applications network-aware in edge-cloud systems
Pub Date: 2019-11-01 | DOI: 10.1109/CloudNet47604.2019.9064109
Dávid Haja, Bálazs Vass, László Toka
The amount of data collected in various IT systems has grown exponentially in recent years. This raises the challenge of processing these huge datasets while meeting the strict time criteria and efficient resource consumption usually demanded by service consumers. The appearance of edge computing has not resolved this problem, as wide-area networking and all its well-known issues come into play and affect the performance of applications scheduled on a hybrid edge-cloud infrastructure. In this paper, we present the steps we have made towards network-aware big data task scheduling over such distributed systems. We propose different resource orchestration algorithms for two challenges we identify related to the network resources of a geographically distributed topology: decreasing end-to-end latency and effectively allocating network bandwidth. The heuristic algorithms we propose provide better big data application performance than the default methods. We implement our solutions in our simulation environment and show the improved quality of big data applications.
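As a rough illustration of what a network-aware placement heuristic can look like (this is not the authors' algorithm; the node names, latencies, and CPU demands below are hypothetical), the sketch greedily assigns each task to the lowest-latency node that still has enough capacity.

```python
# Minimal greedy sketch of latency-aware task placement (illustrative only;
# not the paper's algorithm). Nodes, latencies, and demands are hypothetical.

nodes = {
    "edge-1":  {"cpu": 4,  "latency_ms": 5},
    "edge-2":  {"cpu": 2,  "latency_ms": 8},
    "cloud-1": {"cpu": 32, "latency_ms": 40},
}
tasks = [{"name": "map-0", "cpu": 2}, {"name": "map-1", "cpu": 2}, {"name": "reduce-0", "cpu": 4}]

def place(tasks, nodes):
    """Assign each task to the lowest-latency node that still has enough CPU."""
    placement = {}
    free = {n: spec["cpu"] for n, spec in nodes.items()}
    for task in sorted(tasks, key=lambda t: -t["cpu"]):      # place big tasks first
        candidates = [n for n in nodes if free[n] >= task["cpu"]]
        if not candidates:
            raise RuntimeError(f"no capacity left for {task['name']}")
        best = min(candidates, key=lambda n: nodes[n]["latency_ms"])
        placement[task["name"]] = best
        free[best] -= task["cpu"]
    return placement

print(place(tasks, nodes))
```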
{"title":"Towards making big data applications network-aware in edge-cloud systems","authors":"Dávid Haja, Bálazs Vass, László Toka","doi":"10.1109/CloudNet47604.2019.9064109","DOIUrl":"https://doi.org/10.1109/CloudNet47604.2019.9064109","url":null,"abstract":"The amount of data collected in various IT systems has grown exponentially in the recent years. So the challenge rises how we can process those huge datasets with the fulfillment of strict time criteria and of effective resource consumption, usually posed by the service consumers. This problem is not yet resolved with the appearance of edge computing as wide-area networking and all its well-known issues come into play and affect the performance of the applications scheduled in a hybrid edge-cloud infrastructure. In this paper, we present the steps we made towards network-aware big data task scheduling over such distributed systems. We propose different resource orchestration algorithms for two potential challenges we identify related to network resources of a geographically distributed topology: decreasing end-to-end latency and effectively allocating network bandwidth. The heuristic algorithms we propose provide better big data application performance compared to the default methods. We implement our solutions in our simulation environment and show the improved quality of big data applications.","PeriodicalId":340890,"journal":{"name":"2019 IEEE 8th International Conference on Cloud Networking (CloudNet)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132977346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud-Powered Digital Twins: Is It Reality?
Pub Date: 2019-11-01 | DOI: 10.1109/CloudNet47604.2019.9064112
Balázs Sonkoly, B. Nagy, János Dóka, István Pelle, G. Szabó, Sándor Rácz, János Czentye, László Toka
The flexibility of future production systems envisioned by Industry 4.0 requires safe but efficient Human-Robot Collaboration (HRC). An important enabler of HRC is a sophisticated collision avoidance mechanism that detects objects and potential collision events and, in response, calculates detour trajectories that avoid physical contact. Digital twins provide a novel way to test the impact of different control decisions in a simulated virtual environment, even in parallel. The required computational power can be provided by cloud platforms, but at the cost of higher delay and jitter. Moreover, clouds bring a versatile set of novel techniques easing the life of both developers and operators. Can digital twins exploit the benefits of these concepts? Can robots tolerate the delay characteristics that come with cloud platforms? In this paper, we answer these questions by building on public and private cloud solutions providing different techniques for parallel computation. Our contribution is threefold. First, we introduce a measurement methodology to characterize different approaches in terms of latency. Second, a real HRC use case is elaborated and a relevant KPI is defined. Third, we evaluate the pros and cons of different solutions and their impact on performance.
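The paper's measurement methodology is not detailed in the abstract; the sketch below only illustrates the kind of round-trip latency and jitter characterization such a study builds on (the endpoint URL and sample count are placeholders).

```python
import statistics
import time
import urllib.request

# Hypothetical endpoint standing in for a cloud-hosted digital-twin service.
ENDPOINT = "http://example.com/"
SAMPLES = 50

def measure_rtt(url, samples):
    """Collect round-trip times (seconds) for repeated HTTP requests."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        urllib.request.urlopen(url, timeout=5).read()
        rtts.append(time.perf_counter() - start)
    return rtts

rtts = measure_rtt(ENDPOINT, SAMPLES)
print(f"median RTT:     {statistics.median(rtts) * 1000:.1f} ms")
print(f"p95 RTT:        {sorted(rtts)[int(0.95 * len(rtts))] * 1000:.1f} ms")
print(f"jitter (stdev): {statistics.stdev(rtts) * 1000:.1f} ms")
```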
{"title":"Cloud-Powered Digital Twins: Is It Reality?","authors":"Balázs Sonkoly, B. Nagy, János Dóka, István Pelle, G. Szabó, Sándor Rácz, János Czentye, László Toka","doi":"10.1109/CloudNet47604.2019.9064112","DOIUrl":"https://doi.org/10.1109/CloudNet47604.2019.9064112","url":null,"abstract":"The flexibility of future production systems envisioned by Industry 4.0 requires safe but efficient Human-Robot Collaboration (HRC). An important enabler of HRC is a sophisticated collision avoidance mechanism which can detect objects and potential collision events and as a response, it calculates detour trajectories avoiding physical contacts. Digital twins provide a novel way to test the impact of different control decisions in a simulated virtual environment even in parallel. The required computational power can be provided by cloud platforms but at the cost of higher delay and jitter. Moreover, clouds bring a versatile set of novel techniques easing the life of both developers and operators. Can digital twins exploit the benefits of these concepts? Can the robots tolerate the delay characteristics coming with the cloud platforms? In this paper, we answer these questions by building on public and private cloud solutions providing different techniques for parallel computation. Our contribution is threefold. First, we introduce a measurement methodology to characterize different approaches in terms of latency. Second, a real HRC use-case is elaborated and a relevant KPI is defined. Third, we evaluate the pros/cons of different solutions and their impact on the performance.","PeriodicalId":340890,"journal":{"name":"2019 IEEE 8th International Conference on Cloud Networking (CloudNet)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132395630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A stable matching method for cloud scheduling
Pub Date: 2019-11-01 | DOI: 10.1109/CloudNet47604.2019.9064121
László Toka, Barnabas Gema, Balázs Sonkoly
Cloud computing has been one of the revolutionary breakthroughs of this decade in the ICT world, and its popularity is soaring more than ever. More and more data centers are being deployed to accommodate the physical resources needed by cloud systems. As an important side effect, the global energy demand of data centers is also on the rise. In the meantime, advances in virtualization technologies have made it possible to migrate virtual machines from one host to another without shutting them down. Therefore, the optimization of data center operations through the dynamic placement of virtual machines has become a reality. This paper formalizes the well-studied cloud scheduling problem in a matching-theoretical model in which the virtual machine to physical server mapping is translated into a stable matching problem. We build on an advanced algorithm from the matching theory domain to find the most accommodating scheduling arrangement. Hindered by the complexity of the algorithm, we evaluate various heuristics in numerical simulations of cloud environments. After verifying the selected heuristic algorithm, we present the implementation of the proposed method as a custom compute scheduler for OpenStack.
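For readers unfamiliar with stable matching, the sketch below shows a deferred-acceptance (Gale-Shapley-style) assignment of virtual machines to servers. It is a generic one-to-one illustration with hypothetical preference lists, not the paper's scheduler, and it ignores server capacities for brevity.

```python
# Generic deferred-acceptance (Gale-Shapley) sketch for VM-to-server matching.
# Illustrative only: one VM per server, hypothetical preference lists.

vm_prefs = {
    "vm1": ["srvA", "srvB", "srvC"],
    "vm2": ["srvA", "srvC", "srvB"],
    "vm3": ["srvB", "srvA", "srvC"],
}
server_prefs = {
    "srvA": ["vm2", "vm1", "vm3"],
    "srvB": ["vm1", "vm3", "vm2"],
    "srvC": ["vm1", "vm2", "vm3"],
}

def stable_match(vm_prefs, server_prefs):
    rank = {s: {v: i for i, v in enumerate(prefs)} for s, prefs in server_prefs.items()}
    next_choice = {v: 0 for v in vm_prefs}           # index of next server to propose to
    matched = {}                                     # server -> vm
    free_vms = list(vm_prefs)
    while free_vms:
        vm = free_vms.pop()
        server = vm_prefs[vm][next_choice[vm]]
        next_choice[vm] += 1
        current = matched.get(server)
        if current is None:
            matched[server] = vm                     # server was free: accept
        elif rank[server][vm] < rank[server][current]:
            matched[server] = vm                     # server prefers the new VM: swap
            free_vms.append(current)
        else:
            free_vms.append(vm)                      # rejected: VM proposes again later
    return {vm: srv for srv, vm in matched.items()}

print(stable_match(vm_prefs, server_prefs))
```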
{"title":"A stable matching method for cloud scheduling","authors":"László Toka, Barnabas Gema, Balázs Sonkoly","doi":"10.1109/CloudNet47604.2019.9064121","DOIUrl":"https://doi.org/10.1109/CloudNet47604.2019.9064121","url":null,"abstract":"Cloud computing has been one of the revolutionary breakthroughs of this decade in the ICT world and its popularity is soaring more than ever. More and more data centers are being deployed in order to accommodate the physical resources needed by cloud systems. As an important side effect the global energy demand of data centers are also on the rise. In the meantime the advancement in virtualization technologies has made migrating virtual machines from one host to another without shutting them down possible. Therefore the optimization of data center operations through the dynamic placement of virtual machines became a reality. This paper formalizes the well-studied cloud scheduling problem in a matching theoretical model in which the virtual machine to physical server mapping is translated into a stable matching problem. We build on an advanced algorithm from the matching theory domain in order to find the most accommodating scheduling arrangement. Hindered by the complexity of the algorithm, we evaluate various heuristics in numerical simulations of cloud environments. After the verification of the selected heuristic algorithm, we present the implementation of the proposed method as a custom compute scheduler for OpenStack.","PeriodicalId":340890,"journal":{"name":"2019 IEEE 8th International Conference on Cloud Networking (CloudNet)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114268125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SliMANO: An Expandable Framework for the Management and Orchestration of End-to-end Network Slices
Pub Date: 2019-11-01 | DOI: 10.1109/CloudNet47604.2019.9064072
Flávio Meneses, M. Fernandes, Daniel Corujo, R. Aguiar
This paper proposes a slice management and orchestration framework for abstracting the instantiation of end-to-end network slices, which are composed of a chain of both physical and virtual network functions. The proposed SliMANO framework is a plug-in based system that requests network resources and coordinates the interaction among network orchestration entities to instantiate and chain them into an end-to-end slice. These entities range from management and orchestration (MANO) frameworks to Software Defined Networking (SDN) controllers and Radio Access Network (RAN) controllers. A proof-of-concept prototype was implemented and experimentally evaluated, with results showcasing its feasibility. The results revealed an increase in the delay associated with instantiation and deletion operations when compared with the recently introduced network slicing feature (NetSlice) of the Open-source Management and Orchestration (OSM). The results showed that the added delay stems mostly from SliMANO being an entity external to the orchestrator itself, which is the trade-off for its added inter-operation capabilities. Moreover, SliMANO goes beyond the MANO domain and allows interaction with SDN and RAN controllers.
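SliMANO's code is not part of the abstract; the sketch below merely illustrates the general shape of a plug-in based orchestration layer (the class and method names are hypothetical, not SliMANO's API): each plug-in wraps one orchestration entity behind a common interface so the framework can chain per-domain allocations into an end-to-end slice.

```python
# Hypothetical plug-in abstraction for slice orchestration (not SliMANO's actual API).
from abc import ABC, abstractmethod

class OrchestratorPlugin(ABC):
    """Common interface each orchestration entity (MANO, SDN, RAN) would implement."""

    @abstractmethod
    def allocate(self, slice_id: str, spec: dict) -> dict:
        """Request the resources this entity contributes to the slice."""

    @abstractmethod
    def release(self, slice_id: str) -> None:
        """Tear down the resources this entity holds for the slice."""

class SliceManager:
    def __init__(self, plugins: list[OrchestratorPlugin]):
        self.plugins = plugins

    def instantiate(self, slice_id: str, spec: dict) -> list[dict]:
        # Chain the per-domain allocations into one end-to-end slice.
        return [p.allocate(slice_id, spec) for p in self.plugins]

    def delete(self, slice_id: str) -> None:
        for p in reversed(self.plugins):
            p.release(slice_id)

class DummySdnPlugin(OrchestratorPlugin):
    def allocate(self, slice_id, spec):
        return {"entity": "sdn", "slice": slice_id, "flows": spec.get("flows", [])}
    def release(self, slice_id):
        print(f"sdn: released {slice_id}")

mgr = SliceManager([DummySdnPlugin()])
print(mgr.instantiate("slice-1", {"flows": ["ue->core"]}))
mgr.delete("slice-1")
```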
{"title":"SliMANO: An Expandable Framework for the Management and Orchestration of End-to-end Network Slices","authors":"Flávio Meneses, M. Fernandes, Daniel Corujo, R. Aguiar","doi":"10.1109/CloudNet47604.2019.9064072","DOIUrl":"https://doi.org/10.1109/CloudNet47604.2019.9064072","url":null,"abstract":"This paper proposes a slice management and orchestration framework for abstracting the instantiation of end-to-end network slices, which are composed by a chain of both physical and virtual network functions. In this line, the proposed SliMANO framework is a plug-in based system that requests network resources and coordinates the interaction among network orchestration entities for its instantiation and chaining in order to perform an end-to-end slice. These entities could range from management and orchestration (MANO), Software Defined Networking (SDN) controllers and Radio Access Network (RAN) controllers. A proof-of-concept prototype was implemented and experimentally evaluated, with results showcasing its feasibility. The results revealed a increase in the delay, associated with instantiation and deletion operations, when compared with the recently introduced network slicing feature (NetSlice) of the Open-source Management and Orchestration (OSM). Results showed that the delay is mostly associated to SliMANO being an entity external to the orchestrator itself, which comes as a trade-off for its added inter-operation capabilities. Moreover, SliMANO goes beyond the MANO domain and actually allows the interaction with SDN and RAN controllers.","PeriodicalId":340890,"journal":{"name":"2019 IEEE 8th International Conference on Cloud Networking (CloudNet)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121274979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Survivable Virtual Network Embedding Model with Shared Protection over Elastic Optical Network
Pub Date: 2019-11-01 | DOI: 10.1109/CloudNet47604.2019.9064141
Fujun He, Takehiro Sato, E. Oki
This paper proposes a survivable virtual network embedding model over an elastic optical network, considering shared protection against any single substrate node or link failure. We consider the sharing of backup computing and bandwidth resources to reduce the required backup resources. A heuristic algorithm with polynomial time complexity is presented to solve the problem while promoting backup resource sharing. The results show that the rejection ratio is reduced by about 60% on average by introducing shared protection compared to dedicated protection in our examined scenarios.
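As a small numeric illustration of why shared protection saves resources under a single-failure assumption (the demands below are hypothetical and the calculation is not the paper's model): backups whose primary nodes cannot fail simultaneously can share capacity on the same substrate node.

```python
# Illustrative calculation of shared vs. dedicated backup capacity under a
# single-substrate-node-failure assumption (hypothetical demands, not the paper's model).

# Virtual nodes: (primary substrate node, required computing capacity)
virtual_nodes = {"v1": ("s1", 4), "v2": ("s2", 3), "v3": ("s1", 2)}
backup_node = "s9"   # all backups are placed on the same substrate node in this example

# Dedicated protection: every virtual node reserves its own backup capacity.
dedicated = sum(cap for _, cap in virtual_nodes.values())

# Shared protection: only one substrate node fails at a time, so the reservation
# only needs to cover the worst single failure, i.e. the largest per-primary total.
per_primary = {}
for primary, cap in virtual_nodes.values():
    per_primary[primary] = per_primary.get(primary, 0) + cap
shared = max(per_primary.values())

print(f"dedicated backup capacity on {backup_node}: {dedicated}")   # 9
print(f"shared backup capacity on {backup_node}:    {shared}")      # 6 (s1 fails: v1 + v3)
```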
{"title":"Survivable Virtual Network Embedding Model with Shared Protection over Elastic Optical Network","authors":"Fujun He, Takehiro Sato, E. Oki","doi":"10.1109/CloudNet47604.2019.9064141","DOIUrl":"https://doi.org/10.1109/CloudNet47604.2019.9064141","url":null,"abstract":"This paper proposes a survivable virtual network embedding model over elastic optical network with considering the shared protection against any single substrate node or link failure. We consider the backup computing and bandwidth resource sharing to reduce the required backup resources. A heuristic algorithm with polynomial time complexity is presented to solve the problem with considering promoting the backup resource sharing. The results observe that the rejection ratio is reduced about 60% in average by introducing the shared protection compared to the dedicated protection in our examined scenarios.","PeriodicalId":340890,"journal":{"name":"2019 IEEE 8th International Conference on Cloud Networking (CloudNet)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134313449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Reliability-aware Computation Offloading Solution via UAV-mounted Cloudlets
Pub Date: 2019-11-01 | DOI: 10.1109/CloudNet47604.2019.9064038
E. Haber, H. Alameddine, C. Assi, S. Sharafeddine
Multi-access Edge Computing (MEC) has enabled low-latency computation offloading for provisioning latency-sensitive 5G services that may also require stringent reliability. Given the growing user demands incurring a communication bottleneck in the access network, Unmanned Aerial Vehicles (UAVs) have been proposed to provide edge computation capability by mounting cloudlets on them, hence harnessing their various advantages such as flexibility, low cost, and line-of-sight communication. However, the introduction of UAV-mounted cloudlets necessitates a novel study of the provisioned reliability that accounts for their high failure rate, which can be caused by various factors. In this paper, we study the problem of reliability-aware computation offloading in a UAV-enabled MEC system. We aim at maximizing the number of served offloading requests by optimizing the UAVs' positions, users' task partitioning and assignment, as well as the allocation of radio and computational resources. We formulate the problem as a non-convex mixed-integer program and, due to its complexity, transform it into an approximate convex program and provide a low-complexity iterative algorithm based on the Successive Convex Approximation (SCA) method. Through numerical analysis, we demonstrate the efficiency of our solution and study the achieved performance gains for various latency and reliability requirements corresponding to different use cases in 5G networks.
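The paper's mixed-integer formulation is not reproduced in the abstract; purely as a toy illustration of the SCA idea, the sketch below minimizes a simple non-convex one-dimensional function by repeatedly linearizing its concave part around the current iterate and minimizing the resulting convex surrogate in closed form.

```python
# Toy 1-D Successive Convex Approximation (SCA) example (illustrative only;
# unrelated to the paper's UAV placement/offloading formulation).
#
# Objective: f(x) = x^4 - 3x^2  (convex term x^4 plus concave term -3x^2).
# SCA step:  linearize the concave part at x_k and minimize the convex surrogate
#            g(x) = x^4 - 3*(x_k^2 + 2*x_k*(x - x_k)), whose minimizer solves
#            4x^3 = 6*x_k, i.e. x = (1.5*x_k)**(1/3).

def f(x):
    return x**4 - 3 * x**2

x = 1.0                                     # initial feasible point (assumed)
for _ in range(20):
    x_next = (1.5 * x) ** (1.0 / 3.0)       # closed-form minimizer of the surrogate
    if abs(x_next - x) < 1e-9:
        break
    x = x_next

print(f"SCA converged to x = {x:.6f}, f(x) = {f(x):.6f}")
# Converges to x = sqrt(1.5) ~ 1.2247, a stationary point of the original problem.
```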
{"title":"A Reliability-aware Computation Offloading Solution via UAV-mounted Cloudlets","authors":"E. Haber, H. Alameddine, C. Assi, S. Sharafeddine","doi":"10.1109/CloudNet47604.2019.9064038","DOIUrl":"https://doi.org/10.1109/CloudNet47604.2019.9064038","url":null,"abstract":"Multi-access Edge Computing (MEC) has enabled low-latency computation offloading for provisioning latency-sensitive 5G services that may also require stringent reliability. Given the growing user demands incurring communication bottleneck in the access network, Unmanned Aerial Vehicles (UAVs) have been proposed to provide edge computation capability, through mounting them by cloudlets, hence, harnessing their various advantages such as flexibility, low-cost, and line of sight communication. However, the introduction of UAV-mounted cloudlets necessitates a novel study of the provisioned reliability while accounting for the high failure rate of UAV-mounted cloudlets, that can be caused by various factors. In this paper, we study the problem of reliability-aware computation offloading in a UAV-enabled MEC system. We aim at maximizing the number of served offloading requests, by optimizing the UAVs' positions, users' task partitioning and assignment, as well as the allocation of radio and computational resources. We formulate the problem as a non-convex mixed-integer program, and due to its complexity, we transform it into an approximate convex program and provide a low-complexity iterative algorithm based on the Successive Convex Approximation (SCA) method. Through numerical analysis, we demonstrate the efficiency of our solution, and study the achieved performance gains for various latency and reliability requirements corresponding to different use cases in 5G networks.","PeriodicalId":340890,"journal":{"name":"2019 IEEE 8th International Conference on Cloud Networking (CloudNet)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134521252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mininet on steroids: exploiting the cloud for Mininet performance
Pub Date: 2019-11-01 | DOI: 10.1109/CloudNet47604.2019.9064129
G. Lena, Andrea Tomassilli, D. Saucez, F. Giroire, T. Turletti, C. Lac
Networks have become complex systems that combine various concepts, techniques, and technologies. As a consequence, modelling or simulating them is now extremely complicated, and researchers massively resort to prototyping techniques. Among other tools, Mininet is the most popular when it comes to evaluating SDN proposals. It allows SDN networks to be emulated on a single computer. However, under certain circumstances, experiments (e.g., resource-intensive ones) may overload the host running Mininet. To tackle this issue, we propose Distrinet, a way to distribute Mininet over multiple hosts. Distrinet uses the same API as Mininet, meaning that it is compatible with Mininet programs. Distrinet is generic and can deploy experiments on Linux clusters or in the Amazon EC2 cloud. Thanks to optimization techniques, Distrinet minimizes the number of hosts required to perform an experiment given the capabilities of the hosting infrastructure, meaning that the experiment is run on a single host (as with Mininet) if possible. Otherwise, it is automatically deployed on a platform using a minimum amount of resources in a Linux cluster or with a minimum cost in Amazon EC2.
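Since Distrinet exposes the same API as Mininet, an unmodified Mininet script such as the canonical one below (standard Mininet API; the two-host, single-switch topology is just an example) is the kind of program Distrinet is meant to run; any Distrinet-specific deployment configuration is not covered in the abstract and is not shown here.

```python
#!/usr/bin/env python
# Canonical Mininet example: two hosts behind one switch, then an all-pairs ping.
# Shown only as the kind of Mininet program Distrinet aims to run unchanged.
from mininet.net import Mininet
from mininet.topo import SingleSwitchTopo
from mininet.log import setLogLevel

if __name__ == '__main__':
    setLogLevel('info')
    net = Mininet(topo=SingleSwitchTopo(k=2))   # two hosts, one switch
    net.start()
    net.pingAll()                               # basic connectivity check
    net.stop()
```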
{"title":"Mininet on steroids: exploiting the cloud for Mininet performance","authors":"G. Lena, Andrea Tomassilli, D. Saucez, F. Giroire, T. Turletti, C. Lac","doi":"10.1109/CloudNet47604.2019.9064129","DOIUrl":"https://doi.org/10.1109/CloudNet47604.2019.9064129","url":null,"abstract":"Networks have become complex systems that combine various concepts, techniques, and technologies. As a consequence, modelling or simulating them is now extremely complicated and researchers massively resort to prototyping techniques. Among other tools, Mininet is the most popular when it comes to evaluate SDN propositions. It allows to emulate SDN networks on a single computer. However, under certain circumstances experiments (e.g., resource intensive ones) may overload the host running Mininet. To tackle this issue, we propose Distrinet, a way to distribute Mininet over multiple hosts. Distrinet uses the same API than Mininet, meaning that it is compatible with Mininet programs. Distrinet is generic and can deploy experiments in Linux clusters or in the Amazon EC2 cloud. Thanks to optimization techniques, Distrinet minimizes the number of hosts required to perform an experiment given the capabilities of the hosting infrastructure, meaning that the experiment is run in a single host (as Mininet) if possible. Otherwise, it is automatically deployed on a platform using a minimum amount of resources in a Linux cluster or with a minimum cost in Amazon EC2.","PeriodicalId":340890,"journal":{"name":"2019 IEEE 8th International Conference on Cloud Networking (CloudNet)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124272966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Privacy-Preserving Schema for the Detection and Collaborative Mitigation of DNS Water Torture Attacks in Cloud Infrastructures
Pub Date: 2019-11-01 | DOI: 10.1109/CloudNet47604.2019.9064119
Nikos Kostopoulos, A. Pavlidis, Marinos Dimolianis, D. Kalogeras, B. Maglaris
This paper presents a privacy-preserving schema between Authoritative and Recursive DNS Servers for the efficient detection and collaborative mitigation of DNS Water Torture attacks in cloud environments. Monitoring data are harvested from the victim premises (Authoritative DNS Server and Data Center switches) to detect anomalies, with DNS requester IPs classified as legitimate or suspicious. Subsequently, requests are forwarded or redirected for refined inspection to a filtering mechanism. Mitigation may be offered as a service either on-premises or via cloud scrubbing infrastructures. The proposed schema leverages probabilistic data structures (Bloom Filters, Count-Min Sketches) and related algorithms (SymSpell) to meet the time, space, and privacy constraints required by cloud services. Notably, Bloom Filters are employed to map the Resource Records of large DNS zones in a memory-efficient manner; rapid name lookups are possible with zero false negatives and tolerable false positives. Our approach is tested via a proof-of-concept setup based on traces generated from publicly available DNS traffic datasets.
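To make the Bloom Filter role concrete, the sketch below (a generic Bloom filter with hypothetical zone names and sizing, not the paper's implementation) inserts the valid names of a zone and then checks queried names: names that were inserted are always reported as present (zero false negatives), while a random water-torture-style name is rejected except for an occasional, tolerable false positive.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions over an m-bit array."""
    def __init__(self, m_bits=8192, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        # Derive k independent positions from salted SHA-256 digests.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

# Hypothetical zone contents: names that legitimately exist under example.com.
zone = BloomFilter()
for name in ("www.example.com", "mail.example.com", "ns1.example.com"):
    zone.add(name)

print(zone.might_contain("www.example.com"))         # True: no false negatives
print(zone.might_contain("qzx7random.example.com"))  # almost certainly False (water-torture-style name)
```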
{"title":"A Privacy-Preserving Schema for the Detection and Collaborative Mitigation of DNS Water Torture Attacks in Cloud Infrastructures","authors":"Nikos Kostopoulos, A. Pavlidis, Marinos Dimolianis, D. Kalogeras, B. Maglaris","doi":"10.1109/CloudNet47604.2019.9064119","DOIUrl":"https://doi.org/10.1109/CloudNet47604.2019.9064119","url":null,"abstract":"This paper presents a privacy-preserving schema between Authoritative and Recursive DNS Servers for the efficient detection and collaborative mitigation of DNS Water Torture attacks in cloud environments. Monitoring data are harvested from the victim premises (Authoritative DNS Server and Data Center switches) to detect anomalies with DNS requester IPs classified as legitimate or suspicious. Subsequently, requests are forwarded or redirected for refined inspection to a filtering mechanism. Mitigation may be offered as a service either on-premises or via cloud scrubbing infrastructures. The proposed schema leverages on probabilistic data structures (Bloom Filters, Count-Min Sketches) and related algorithms (SymSpell) to meet time, space and privacy constraints required by cloud services. Notably, Bloom Filters are employed to map Resource Records of large DNS zones in a memory efficient manner; rapid name lookups are possible with zero false negatives and tolerable false positives. Our approach is tested via a proof of concept setup based on traces generated from publicly available DNS traffic datasets.","PeriodicalId":340890,"journal":{"name":"2019 IEEE 8th International Conference on Cloud Networking (CloudNet)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125577177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}