Enabling Privacy-Preserving Header Matching for Outsourced Middleboxes
Pub Date: 2018-06-01 | DOI: 10.1109/IWQoS.2018.8624187
Yu Guo, Cong Wang, Xingliang Yuan, X. Jia
Over the past few years, enterprises have started adopting software middlebox services from cloud or NFV service providers. Although this new service model is recognized as cost-effective and scalable for traffic processing, privacy concerns arise because traffic is redirected to outsourced middleboxes. To ease these concerns, recent efforts have been made to design secure middlebox services that can operate directly over encrypted traffic and encrypted middlebox rules. However, prior designs cover only a portion of frequently used network functions. To push this area forward, in this work we investigate header-matching-based functions such as firewall filtering and packet classification. To enable privacy-preserving processing of encrypted packets, we start from the recent primitive of order-revealing encryption (ORE) for encrypted range search. In particular, we devise a new practical ORE construction tailored for network functions. Its advantages include: 1) guaranteed protection of packet headers and rule-specified ranges; 2) reduced accessible information during comparisons; and 3) rule-aware size reduction for ORE ciphertexts. We implement a fully functional system prototype and deploy it on Microsoft Azure. Evaluation results show that our system achieves a per-packet matching latency of 0.53 to 15.87 milliseconds over 1.6K firewall rules.
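To make the encrypted range-matching idea concrete, the following is a minimal toy sketch in Python. The `toy_ore_encrypt`/`toy_ore_compare` functions are stand-ins invented for illustration (they are order-preserving and leak far more than a real ORE scheme, and they are not the construction proposed in the paper); only the matching loop reflects how a middlebox could test an encrypted header field against encrypted rule ranges.

```python
# Toy sketch of ORE-based header matching (NOT the paper's construction).

def toy_ore_encrypt(value: int) -> bytes:
    # Stand-in for real ORE encryption; simply order-preserving here,
    # which leaks far more than a genuine ORE scheme would.
    return value.to_bytes(4, "big")

def toy_ore_compare(ct_a: bytes, ct_b: bytes) -> int:
    # Stand-in comparison: a real ORE compare reveals only <, ==, >.
    return (ct_a > ct_b) - (ct_a < ct_b)

def match_header(header_ct: bytes, rules: list) -> str:
    """Return the action of the first rule whose encrypted [low, high] range
    contains the encrypted header field (e.g., a destination port)."""
    for low_ct, high_ct, action in rules:
        if toy_ore_compare(header_ct, low_ct) >= 0 and toy_ore_compare(header_ct, high_ct) <= 0:
            return action
    return "DEFAULT_ALLOW"

rules = [(toy_ore_encrypt(0), toy_ore_encrypt(1023), "DROP")]  # block well-known ports
print(match_header(toy_ore_encrypt(443), rules))    # -> DROP
print(match_header(toy_ore_encrypt(8080), rules))   # -> DEFAULT_ALLOW
```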
{"title":"Enabling Privacy-Preserving Header Matching for Outsourced Middleboxes","authors":"Yu Guo, Cong Wang, Xingliang Yuan, X. Jia","doi":"10.1109/IWQoS.2018.8624187","DOIUrl":"https://doi.org/10.1109/IWQoS.2018.8624187","url":null,"abstract":"Over the past few years, enterprises start adopting software middlebox services from cloud or NFV service providers. Although this new service model is recognized to be cost-effective and scalable for traffic processing, privacy concerns arise because of traffic redirection to outsourced middleboxes. To ease these concerns, recent efforts are made to design secure middlebox services that can directly function over encrypted traffic and middlebox rules. But prior designs only work for portions of frequently-used network functions. To push forward this area, in this work, we investigate header matching based functions like firewall filtering and packet classification. To enable privacy-preserving processing on encrypted packets, we start from the latest primitive “order-revealing encryption (ORE)” for encrypted range search. In particular, we devise a new practical ORE construction tailored for network functions. The advantages include: 1) guaranteed protection of packet headers and rule specified ranges; 2) reduced accessible information during comparisons; 3) rule-aware size reduction for ORE ciphertexts. We implement a fully functional system prototype and deploy it at Microsoft Azure Cloud. Evaluation results show that our system can achieve per packet matching latency 0.53 to 15.87 millisecond over 1.6K firewall rules.","PeriodicalId":222290,"journal":{"name":"2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS)","volume":"121 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113999804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online Control of Cloud and Edge Resources Using Inaccurate Predictions
Pub Date: 2018-06-01 | DOI: 10.1109/IWQoS.2018.8624119
Lei Jiao, A. Tulino, J. Llorca, Yue Jin, A. Sala, Jun Li
We study cloud resource control in a global-local distributed cloud infrastructure. We first model and formulate the problem while capturing multiple challenges, such as the inter-dependency between resources and the uncertainty in the inputs. We then propose a novel online algorithm which, via the regularization technique, decouples the original problem into a series of subproblems for individual time slots, and solves both the subproblems and the original problem over every prediction time window to jointly make resource allocation decisions. Compared against the offline optimum with accurate inputs, our approach maintains a provable, parameterized worst-case performance gap using only inaccurate inputs, under certain conditions. Finally, we conduct evaluations with large-scale, real-world data traces and show that our solution outperforms existing methods and works efficiently with near-optimal cost in practice.
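As a rough illustration of the regularization idea, the following generic formulation (our notation, not necessarily the paper's) shows how a time-coupled online problem with switching costs can be decoupled into per-slot subproblems by replacing the coupling term with a strictly convex surrogate:

```latex
% Generic sketch of regularization-based decoupling (illustrative notation only).
\[
\min_{\{x_t\}} \sum_{t=1}^{T} f_t(x_t) \;+\; \beta \sum_{t=1}^{T} \lVert x_t - x_{t-1} \rVert_1
\;\;\Longrightarrow\;\;
x_t \in \arg\min_{x \ge 0} \; f_t(x) \;+\; \beta\, \Delta(x \,\|\, x_{t-1}),
\]
```

where $f_t$ is the slot-$t$ operational cost, the $\ell_1$ term models reconfiguration cost, and $\Delta(\cdot\|\cdot)$ is a strictly convex (e.g., relative-entropy-like) regularizer that couples slot $t$ only to the already-fixed decision $x_{t-1}$.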
{"title":"Online Control of Cloud and Edge Resources Using Inaccurate Predictions","authors":"Lei Jiao, A. Tulino, J. Llorca, Yue Jin, A. Sala, Jun Li","doi":"10.1109/IWQoS.2018.8624119","DOIUrl":"https://doi.org/10.1109/IWQoS.2018.8624119","url":null,"abstract":"We study cloud resource control in the global-local distributed cloud infrastructure. We firstly model and formulate the problem while capturing the multiple challenges such as the inter-dependency between resources and the uncertainty in the inputs. We then propose a novel online algorithm which, via the regularization technique, decouples the original problem into a series of subproblems for individual time slots and solves both the subproblems and the original problem over every prediction time window to jointly make resource allocation decisions. Compared against the offline optimum with accurate inputs, our approach maintains a provable parameterized worst-case performance gap with only inaccurate inputs under certain conditions. Finally, we conduct evaluations with large-scale, real-world data traces and show that our solution outperforms existing methods and works efficiently with near-optimal cost in practice.","PeriodicalId":222290,"journal":{"name":"2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114481753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving Quality of Experience of Service-Chain Deployment for Multiple Users
Pub Date: 2018-06-01 | DOI: 10.1109/IWQoS.2018.8624167
I-Chih Wang, Charles H.-P. Wen, H. J. Chao
The fifth-generation (5G) mobile communication network aims to provide high-rate, low-latency services. When a user subscribes to a chain of service functions (a.k.a. a service chain) from a telecom provider, a Service Level Agreement (SLA) is specified according to the user's requirements. Deploying service chains optimally has long been a difficult problem. Several previous works have presented various strategies of service-chain deployment for optimizing either latency or computational resources; however, over-optimizing latency or computational resources is not necessarily equivalent to improving quality of experience. Therefore, in this paper, we formally formulate the problem of optimizing quality of experience using queuing theory and mixed-integer linear programming. In addition, we propose an efficient algorithm named “QoE-driven Service-Chain Deployment with Latency Prediction” for deploying a service chain for a user in practice. According to the experiments, our algorithm reduces rejections and waiting time by more than 99%, notably elevating the quality of experience for users.
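For context on the queuing-theory side (a textbook M/M/1 model, not necessarily the exact model of the paper), the expected sojourn time at a service-function instance with service rate $\mu$ and arrival rate $\lambda < \mu$ is

```latex
% Textbook M/M/1 sojourn time; a k-function chain then adds one such term per hop.
\[
T = \frac{1}{\mu - \lambda},
\qquad
T_{\text{chain}} \approx \sum_{i=1}^{k} \frac{1}{\mu_i - \lambda} \;+\; \text{propagation delay},
\]
```

so a deployment algorithm can constrain the predicted end-to-end latency of a candidate placement against the SLA inside the MILP.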
{"title":"Improving Quality of Experience of Service-Chain Deployment for Multiple Users","authors":"I-Chih Wang, Charles H.-P. Wen, H. J. Chao","doi":"10.1109/IWQoS.2018.8624167","DOIUrl":"https://doi.org/10.1109/IWQoS.2018.8624167","url":null,"abstract":"The fifth generation (5G) mobile communication network aims at providing high-rate, low-latency services. When a user subscribes a chain of service functions (a.k.a. service chain) from the telecom providers, a Service Level Agreement (SLA) is specified according to his requirement. Deploying service chains optimally has always been a big issue. Several previous works have presented various strategies of service-chain deployment for optimizing either latency or computational resources; however, over-optimization of latency or computational resource is not necessarily equivalent to improvement on quality of experience. Therefore, in this paper, we formally formulate this problem of optimizing quality of experience with the queuing theory and mixed-integer linear programming. In addition, we propose an efficient algorithm named “QoE-driven Service-Chain Deployment with Latency Prediction” for deploying a service chain for a user in practice. According to the experiments, our algorithm reduces > 99% rejections and > 99% waiting time, notably elevating the quality of experience for users.","PeriodicalId":222290,"journal":{"name":"2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129846230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Competitive Online Scheduling Algorithms with Applications in Deadline-Constrained EV Charging
Pub Date: 2018-06-01 | DOI: 10.1109/IWQoS.2018.8624184
B. Alinia, M. S. Talebi, M. Hajiesmaili, Ali Yekkehkhany, N. Crespi
This paper studies the classical problem of online scheduling of deadline-sensitive jobs with partial values and investigates its extension to Electric Vehicle (EV) charging scheduling by taking into account the processing rate limits of jobs and the charging station capacity constraint. The problem lies in the category of time-coupled online scheduling problems without availability of future information. This paper proposes two online algorithms, both of which are shown to be $(2-\frac{1}{U})$-competitive, where $U$ is the maximum scarcity level, a parameter that indicates the demand-to-supply ratio. The first proposed algorithm is deterministic, whereas the second is randomized and enjoys a lower computational complexity. When $U$ grows large, the performance of both algorithms approaches that of the state of the art for the case where there are processing rate limits on the jobs. Nonetheless, in realistic cases, where $U$ is typically small, the proposed algorithms enjoy a much lower competitive ratio. To carry out the competitive analysis of our algorithms, we present a proof technique which, to the best of our knowledge, is novel. This technique could also be used to simplify the competitive analysis of some existing algorithms, and thus could be of independent interest.
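For reference, the $(2-\frac{1}{U})$-competitiveness claim can be read with the standard definition for value-maximization problems (the scarcity level $U$ is the paper's demand-to-supply parameter):

```latex
% Standard competitive-ratio reading of the bound (U is the maximum scarcity level).
\[
\frac{\mathrm{OPT}(\sigma)}{\mathrm{ALG}(\sigma)} \;\le\; 2 - \frac{1}{U}
\quad \text{for every input sequence } \sigma,
\]
```

so for $U = 2$ the algorithms collect at least $2/3$ of the offline-optimal value, and the guarantee approaches $1/2$ of the optimum as $U \to \infty$.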
{"title":"Competitive Online Scheduling Algorithms with Applications in Deadline-Constrained EV Charging","authors":"B. Alinia, M. S. Talebi, M. Hajiesmaili, Ali Yekkehkhany, N. Crespi","doi":"10.1109/IWQoS.2018.8624184","DOIUrl":"https://doi.org/10.1109/IWQoS.2018.8624184","url":null,"abstract":"This paper studies the classical problem of online scheduling of deadline-sensitive jobs with partial values and investigates its extension to Electric Vehicle (EV) charging scheduling by taking into account the processing rate limit of jobs and charging station capacity constraint. The problem lies in the category of time-coupled online scheduling problems without availability of future information. This paper proposes two online algorithms, both of which are shown to be $(2-frac{1}{U})$-competitive, where $U$ is the maximum scarcity level, a parameter that indicates demand-to-supply ratio. The first proposed algorithm is deterministic, whereas the second is randomized and enjoys a lower computational complexity. When $U$ grows large, the performance of both algorithms approaches that of the state-of-the-art for the case where there is processing rate limits on the jobs. Nonetheless in realistic cases, where $U$ is typically small, the proposed algorithms enjoy a much lower competitive ratio. To carry out the competitive analysis of our algorithms, we present a proof technique, which is novel to the best of our knowledge. This technique could also be used to simplify the competitive analysis of some existing algorithms, and thus could be of independent interest.","PeriodicalId":222290,"journal":{"name":"2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129353394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Slow but Steady: Cap-Based Client-Network Interaction for Improved Streaming Experience
Vengatanathan Krishnamoorthi, Niklas Carlsson, Emir Halepovic
Pub Date: 2018-06-01 | DOI: 10.1109/IWQoS.2018.8624170
Due to the widespread popularity of streaming services, many streaming clients typically compete over bottleneck links for their own bandwidth share. However, in such environments, the rate adaptation algorithms used by modern streaming clients often result in instability and unfairness, which negatively affects the playback experience. In addition, mobile clients often waste bandwidth by trying to stream excessively high video bitrates. We present and evaluate a cap-based framework in which the network and clients cooperate to improve the overall Quality of Experience (QoE). First, to motivate the framework, we conduct a comprehensive study using a lab setup, showing that a fixed rate cap comes with both benefits (e.g., data savings, improved stability and fairness) and drawbacks (e.g., higher startup times and slower recovery after stalls). To address the drawbacks while keeping the benefits, we then introduce and evaluate a framework that includes (i) buffer-aware rate caps, in which the network temporarily boosts the rate cap of clients during video startup and under low-buffer conditions, and (ii) boost-aware client-side adaptation algorithms that optimize the bitrate selection during the boost periods. Combined with information sharing between the network and clients, these mechanisms are shown to improve QoE while reducing wasted bandwidth.
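A minimal sketch of the cap policy and boost-aware selection, with illustrative thresholds and function names that are not taken from the paper:

```python
# Sketch of a buffer-aware rate cap and boost-aware bitrate selection
# (illustrative thresholds; not the paper's exact policy).

BASE_CAP_KBPS = 3000     # steady-state cap applied by the network
BOOST_CAP_KBPS = 6000    # temporarily boosted cap
LOW_BUFFER_SEC = 10      # boost while the client buffer is below this level

def current_cap(buffer_level_sec: float, in_startup: bool) -> int:
    """Return the rate cap the network should apply to this client right now."""
    if in_startup or buffer_level_sec < LOW_BUFFER_SEC:
        return BOOST_CAP_KBPS   # help fast startup and stall recovery
    return BASE_CAP_KBPS        # keep the stability/fairness/data-savings benefits

def select_bitrate(available_kbps: list, cap_kbps: int, boosted: bool) -> int:
    """Boost-aware client: never pick a bitrate the post-boost cap cannot sustain."""
    sustainable = BASE_CAP_KBPS if boosted else cap_kbps
    candidates = [b for b in available_kbps if b <= sustainable]
    return max(candidates) if candidates else min(available_kbps)

print(current_cap(buffer_level_sec=4.0, in_startup=False))              # -> 6000 (boosted)
print(select_bitrate([1000, 2500, 5000], cap_kbps=6000, boosted=True))  # -> 2500
```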
{"title":"Slow but Steady: Cap-Based Client-Network Interaction for Improved Streaming Experience","authors":"Vengatanathan Krishnamoorthi, Niklas Carlsson, Emir Halepovic","doi":"10.1109/IWQoS.2018.8624170","DOIUrl":"https://doi.org/10.1109/IWQoS.2018.8624170","url":null,"abstract":"Due to widespread popularity of streaming services, many streaming clients typically compete over bottleneck links for their own bandwidth share. However, in such environments, the rate adaptation algorithms used by modern streaming clients often result in instability and unfairness, which negatively affects the playback experience. In addition, mobile clients often waste bandwidth by trying to stream excessively high video bitrates. We present and evaluate a cap-based framework in which the network and clients cooperate to improve the overall Quality of Experience (QoE). First, to motivate the framework, we conduct a comprehensive study using the lab setup showing that a fixed rate cap comes with both benefits (e.g., data savings, improved stability and fairness) and drawbacks (e.g., higher startup times and slower recovery after stalls). To address the drawbacks while keeping the benefits, we then introduce and evaluate a framework that includes (i) buffer-aware rate caps in which the network temporarily boosts the rate cap of clients during video startup and under low buffer conditions, and (ii) boost-aware client-side adaptation algorithms that optimize the bitrate selection during the boost periods. Combined with information sharing between the network and clients, these mechanisms are shown to improve QoE, while reducing wasted bandwidth.","PeriodicalId":222290,"journal":{"name":"2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130889270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Highlight-Aware Content Placement in Crowdsourced Livecast Services
Pub Date: 2018-06-01 | DOI: 10.1109/IWQoS.2018.8624144
Cong Zhang, Jiangchuan Liu, Haitian Pang, Fangxin Wang
Recent years have witnessed an explosion of crowdsourced livecast (i.e., live broadcast) services, in which any Internet user can act as a broadcaster and publish livecasts to fellow viewers. To help grow broadcasters' channels, crowdsourced livecast services provide a past-broadcast saving service, allowing viewers to watch the replays they may have missed. Our real-trace measurement and questionnaire survey show that (1) most livecasts are extremely long, and (2) such long durations significantly degrade viewers' Quality of Experience (QoE) when watching the replays. To address this issue and improve viewers' QoE, we propose HighCast, a crowdsourced framework based on the interactive messages contributed by viewers in crowdsourced livecast services. Using a highlight-aware detection module, HighCast exploits the detection results to schedule content placement according to the importance of the predicted streaming highlights. Trace-based evaluations illustrate that the proposed framework improves the prediction accuracy and reduces the viewing latency.
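A toy sketch of chat-driven highlight detection, using a simple message-rate spike rule; the detection module in HighCast may work quite differently:

```python
# Toy highlight detector: flag windows where the chat-message rate spikes well above
# the channel's overall average (a simple heuristic, not necessarily HighCast's module).

def detect_highlights(msg_timestamps, window_sec=30, spike_factor=3.0):
    if not msg_timestamps:
        return []
    duration = max(msg_timestamps) - min(msg_timestamps) + window_sec
    avg_rate = len(msg_timestamps) / duration          # messages per second, overall
    highlights = []
    start = min(msg_timestamps)
    while start < max(msg_timestamps):
        in_window = [t for t in msg_timestamps if start <= t < start + window_sec]
        if len(in_window) / window_sec > spike_factor * avg_rate:
            highlights.append((start, start + window_sec))
        start += window_sec
    return highlights

# Messages cluster around t=120s, suggesting a highlight there.
stamps = [5, 40, 90, 118, 119, 120, 121, 122, 123, 125, 200]
print(detect_highlights(stamps))   # -> [(95, 125)] with this window alignment
```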
{"title":"Highlight-Aware Content Placement in Crowdsourced Livecast Services","authors":"Cong Zhang, Jiangchuan Liu, Haitian Pang, Fangxin Wang","doi":"10.1109/IWQoS.2018.8624144","DOIUrl":"https://doi.org/10.1109/IWQoS.2018.8624144","url":null,"abstract":"Recent years have witnessed an explosion of crowdsourced livecast (i.e., live broadcast) services, in which any Internet users can act as broadcasters to publish livecasts to fellow viewers. To help grow broadcasters' channels, crowdsourced livecast services provide a past-broadcast saving service, allowing viewers to watch the replays they may have missed. Our real-trace measurement and questionnaire survey show that (1) the duration of most of livecasts is extremely long; (2) a much longer duration largely affects the viewers' Quality-of-Experiences (QoE) when watching the replays. To address this issue and improve viewers' QoE, we propose a crowdsourced framework HighCast based on the interactive messages contributed by the viewers in crowdsourced livecast services. According to a highlight-aware detection module, HighCast can exploit the detection results to schedule the content placement by considering the importance of the predicted streaming highlights. The trace-based evaluations illustrate that the proposed framework improves the prediction accuracy and reduces the viewing latency.","PeriodicalId":222290,"journal":{"name":"2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128702258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Practical Key Tag Monitoring in RFID Systems
Pub Date: 2018-06-01 | DOI: 10.1109/IWQoS.2018.8624117
Jihong Yu, Wei Gong, Jiangchuan Liu, Lin Chen, Fangxin Wang, Haitian Pang
With the rapid development of radio frequency identification (RFID) technology, ever-increasing research effort has been dedicated to devising various RFID-enabled services. Key tag monitoring, which detects anomalies of key tags, is one of the most important services in Internet-of-Things applications such as inventory management. Yet prior work assumes that all tags are equipped with hashing functionality and that a reader reports channel states in every slot, which is not supported by commercial off-the-shelf (COTS) RFID tags and readers. To bridge this gap, this paper is devoted to enabling a key tag monitoring service with COTS devices. In particular, we introduce two anomaly monitoring protocols to detect whether any key tag is absent from the system. The first protocol employs Q-query, which works in an analog frame slotted Aloha paradigm, to interrogate tags and collect tag IDs; an anomaly event is reported if at least one key tag ID is not present among the collected IDs. To reduce the time cost of the first protocol caused by tag collisions, we present a collision-free method that uses select-query to specify one key tag to reply in each slot; if there is no response in a slot, the specified key tag is regarded as missing. We conduct experiments to evaluate the two protocols.
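A sketch of the collision-free check, assuming a hypothetical reader API (the `select`/`query` names are placeholders, not a specific vendor interface):

```python
# Sketch of the collision-free key-tag check: address one key tag per slot via a
# select query and treat silence as absence. The reader API below is hypothetical.

def check_key_tags(reader, key_tag_ids):
    """Return the set of key tag IDs that did not respond (suspected missing)."""
    missing = set()
    for tag_id in key_tag_ids:
        reader.select(mask=tag_id)        # hypothetical: restrict the next query to one tag
        reply = reader.query(slots=1)     # hypothetical: single-slot query, no collisions
        if reply is None:                 # silent slot => the selected key tag is absent
            missing.add(tag_id)
    return missing

class FakeReader:
    """Stand-in reader for demonstration: simulates which tags are physically present."""
    def __init__(self, present):
        self.present = set(present)
        self._selected = None
    def select(self, mask):
        self._selected = mask
    def query(self, slots=1):
        return self._selected if self._selected in self.present else None

reader = FakeReader(present={"EPC-001", "EPC-003"})
print(check_key_tags(reader, ["EPC-001", "EPC-002", "EPC-003"]))   # -> {'EPC-002'}
```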
{"title":"Practical Key Tag Monitoring in RFID Systems","authors":"Jihong Yu, Wei Gong, Jiangchuan Liu, Lin Chen, Fangxin Wang, Haitian Pang","doi":"10.1109/IWQoS.2018.8624117","DOIUrl":"https://doi.org/10.1109/IWQoS.2018.8624117","url":null,"abstract":"With rapid development of radio frequency identification (RFID) technology, ever-increasing research effort has been dedicated to devising various RFID-enabled services. The key tag monitoring, which is to detect anomaly of key tags, is one of the most important services in such important Internet-of-Things applications as inventory management. Yet prior work assumes that all tags are armed with hashing functionality and a reader would report channel states in every slot, which is not supported by commercial off-the-shelf (COTS) RFID tags and readers. To bridge this gap, this paper is devoted to enabling key tag monitoring service with COTS devices. In particular, we introduce two anomaly monitoring protocols to detect whether there is any key tag absent from the system. The first protocol employs Q-query that works in an analog frame slotted Aloha paradigm to interrogate tags and collect tag IDs. An anomaly event will be found if at least one key tag ID is not present in the collected ones. To reduce time cost of the first protocol resulted from tag collisions, we present a collision-free method that uses select-query to specify a key tag to reply in each slot. Once there is no response in a slot, the specified key tag is regarded as a missing tag. We conduct experiments to evaluate two protocols.","PeriodicalId":222290,"journal":{"name":"2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125354809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Flow-Level Traffic Engineering in Conventional Networks with Hop-by-Hop Routing
Pub Date: 2018-06-01 | DOI: 10.1109/IWQoS.2018.8624179
Nan Geng, Yuan Yang, Mingwei Xu
Fine-grained traffic engineering (TE) that enables per-flow control is considered necessary in the future Internet. In this paper, we study how to realize flow-level TE in conventional networks, where hop-by-hop routing is available and advanced technologies such as SDN and MPLS are not deployed. Based on analysis and modeling of real Internet traffic, we propose to detect and schedule, in real time, a few large flows that dominate the traffic volume. The proposed scheme leverages advanced algorithms for detection, computes the rerouting paths in a centralized server, uses extended OSPF to distribute the routing, and uses a few ACL entries for flow-level forwarding. We formalize the link-weight-assignment-based large flow scheduling problem and prove that it is NP-hard. We develop algorithms to compute the routing and reduce the number of extra LSAs required. We present a set of theoretical results on the TE performance bounds when the number of large flows varies. Experiment and simulation results show that our scheme can reroute large flows within 0.5 seconds, and the maximum link utilization is within 102% of the optimal solution for source- and destination-address-based flows, while the number of extra LSAs is small.
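A minimal sketch of the large-flow detection step, using a plain byte-count threshold per measurement window; the paper's detection algorithms are more advanced:

```python
# Identify flows whose byte counts over the current window exceed a threshold; only these
# few "large" flows receive ACL entries and rerouting. A plain threshold rule for
# illustration only.

from collections import defaultdict

def detect_large_flows(packets, threshold_bytes=10_000_000):
    """packets: iterable of (src_ip, dst_ip, size_bytes) seen in the current window."""
    volume = defaultdict(int)
    for src, dst, size in packets:
        volume[(src, dst)] += size
    return {flow for flow, total in volume.items() if total >= threshold_bytes}

window = [("10.0.0.1", "10.0.1.9", 9_000_000),
          ("10.0.0.1", "10.0.1.9", 2_000_000),
          ("10.0.0.2", "10.0.1.7", 50_000)]
print(detect_large_flows(window))   # -> {('10.0.0.1', '10.0.1.9')}
```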
{"title":"Flow-Level Traffic Engineering in Conventional Networks with Hop-by-Hop Routing","authors":"Nan Geng, Yuan Yang, Mingwei Xu","doi":"10.1109/IWQoS.2018.8624179","DOIUrl":"https://doi.org/10.1109/IWQoS.2018.8624179","url":null,"abstract":"A fine-grained traffic engineering (TE) that enables per-flow control is considered to be necessary in future Internet. In this paper, we study to realize flow-level TE in conventional networks, where hop-by-hop routing is available, and advanced technologies such as SDN and MPLS are not deployed. Based on analysis and modelling on real Internet traffic, we propose to detect and schedule a few large flows in real time, which dominate the traffic amount. The proposed scheme leverages advanced algorithms for detection, computes the rerouting paths in a centralized server, uses extended OSPF to distribute the routing, and uses a few ACL entries for flow-level forwarding. We formalize the link weight assignment-based large flow scheduling problem and prove that the problem is NP-hard. We develop algorithms to compute the routing and reduce extra LSA number required. We present a set of theoretical results on the TE performance bounds when the number of large flows varies. Experiment and simulation results show that our scheme can reroute large flows within 0.5 second, and the maximum link utilization is within 102% of the optimal solution for source and destination addresses-based flows, while the extra LSA number is small.","PeriodicalId":222290,"journal":{"name":"2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130798352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enabling Efficient Source and Path Verification via Probabilistic Packet Marking
Pub Date: 2018-06-01 | DOI: 10.1109/IWQoS.2018.8624169
Bo Wu, Ke Xu, Qi Li, Zhuotao Liu, Yih-Chun Hu, M. Reed, Meng Shen, F. Yang
The Internet lacks verification of source authenticity and of path compliance between the planned packet delivery paths and the real delivery paths, which allows attackers to construct attacks such as source spoofing and traffic hijacking. Thus, it is essential to enable source and path verification in networks to detect forwarding anomalies and ensure correct packet delivery. However, most of the existing security mechanisms can only capture anomalies but are unable to locate them. Besides, they incur significant computation and communication overhead, which degrades packet delivery performance. In this paper, we propose a highly efficient packet forwarding verification mechanism for networks, called PPV, which verifies packet sources and their forwarding paths in real time. PPV enables probabilistic packet marking in routers instead of verifying all packets, so it can efficiently identify forwarding anomalies by verifying the markings. Moreover, it localizes packet forwarding anomalies, e.g., malicious routers, by reconstructing packet forwarding paths based on the packet markings. We implement a PPV prototype in Click routers and commodity servers, and conduct experiments on a real testbed built upon the prototype. The experimental results demonstrate the efficiency and performance of PPV. In particular, PPV significantly improves the throughput and goodput of forwarding verification, achieving around 2x and 3x improvements, respectively, compared with the state-of-the-art OPT scheme.
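To make the marking idea concrete, here is a toy sketch of probabilistic, authenticated marking at each hop; PPV's actual marking format, key management, and verification procedure are defined in the paper and differ from this simplification:

```python
# Toy probabilistic marking: each router overwrites the packet's mark with its own
# authenticated identifier with probability p. Simplified for illustration only.

import hmac, hashlib, random

P_MARK = 0.2   # marking probability per hop (illustrative value)

def mark(packet: dict, router_id: str, key: bytes) -> None:
    """With probability P_MARK, stamp the packet with this router's ID and a MAC."""
    if random.random() < P_MARK:
        tag = hmac.new(key, (router_id + packet["flow_id"]).encode(), hashlib.sha256)
        packet["mark"] = (router_id, tag.hexdigest()[:16])

def verify(packet: dict, router_keys: dict) -> bool:
    """Verifier recomputes the MAC for the claimed router; a mismatch flags an anomaly."""
    if packet.get("mark") is None:
        return True   # an unmarked packet carries no evidence either way
    router_id, tag = packet["mark"]
    expected = hmac.new(router_keys[router_id], (router_id + packet["flow_id"]).encode(),
                        hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(tag, expected)

keys = {"r1": b"k1", "r2": b"k2"}
pkt = {"flow_id": "f42", "mark": None}
for rid in ["r1", "r2"]:
    mark(pkt, rid, keys[rid])
print(verify(pkt, keys))   # -> True for honest routers
```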
{"title":"Enabling Efficient Source and Path Verification via Probabilistic Packet Marking","authors":"Bo Wu, Ke Xu, Qi Li, Zhuotao Liu, Yih-Chun Hu, M. Reed, Meng Shen, F. Yang","doi":"10.1109/IWQoS.2018.8624169","DOIUrl":"https://doi.org/10.1109/IWQoS.2018.8624169","url":null,"abstract":"The Internet lacks verification of source authenticity and path compliance between the planned packet delivery paths and the real delivery paths, which allows attackers to construct attacks like source spoofing and traffic hijacking attacks. Thus, it is essential to enable source and path verification in networks to detect forwarding anomalies and ensure correct packet delivery. However, most of the existing security mechanisms can only capture anomalies but are unable to locate the detected anomalies. Besides, they incur significant computation and communication overhead, which exacerbates the packet delivery performance. In this paper, we propose a high-efficient packet forwarding verification mechanism called PPV for networks, which verifies packet source and their forwarding paths in real time. PPV enables probabilistic packet marking in routers instead of verifying all packets. Thus, it can efficiently identify forwarding anomalies by verifying markings. Moreover, it localizes packet forwarding anomalies, e.g., malicious routers, by reconstructing packet forwarding paths based on the packet markings. We implement PPV prototype in Click routers and commodity servers, and conducts real experiments in a real testbed built upon the prototype. The experimental results demonstrate the efficiency and performance of PPV. In particular, PPV significantly improves the throughput and the goodput of forwarding verification, and achieves around 2 times and 3 times improvement compared with the-state-of-art OPT scheme, respectively.","PeriodicalId":222290,"journal":{"name":"2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116831416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software-Defined Label Switching: Scalable Per-Flow Control in SDN
Nanyang Huang, Qing Li, Dong Lin, Xiaowen Li, Gengbiao Shen, Yong Jiang
Pub Date: 2018-06-01 | DOI: 10.1109/IWQoS.2018.8624177
Deploying Software-Defined Networks (SDNs) faces various challenges, one of which is implementing per-flow control while preserving data plane scalability. Due to the limited rule storage space of commodity SDN switches, achieving flexible control and maintaining a low-latency data plane with low storage cost are often at odds. Unfortunately, existing SDN architectures fail to implement per-flow control efficiently: they either incur extra delays for packets or place a high storage burden on switches. In this paper, we propose Software-Defined Label Switching (SDLS) to achieve both data plane scalability and per-flow control. SDLS combines central control with label switching to reduce the storage burden while maintaining per-flow control. SDLS introduces software switches into the data plane and manages the network in regions for scalability. SDLS is OpenFlow-compatible and employs a hybrid data plane to provide efficient flow setups. We evaluate SDLS by comparing it with state-of-the-art SDN architectures and show that SDLS rivals the best of them in latency while reducing the number of flow entries and overflows by more than 47%.
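A toy sketch of the label-switching idea behind per-flow control: the edge (or controller) assigns a compact label per flow once, and core switches forward on the label rather than full flow tuples. This is illustrative only; SDLS's region-based, OpenFlow-compatible design is more involved:

```python
# Toy label switching: assign a compact label per flow at the edge, then forward in the
# core on labels only. Illustrative sketch, not SDLS's actual data-plane design.

from itertools import count

_label_gen = count(start=1)
flow_to_label = {}          # kept at the edge / controller
label_to_port = {}          # small exact-match table in each core switch

def ingress(flow_tuple, egress_port):
    """Assign a label to a new flow once and install a single core entry for it."""
    if flow_tuple not in flow_to_label:
        flow_to_label[flow_tuple] = next(_label_gen)
    label = flow_to_label[flow_tuple]
    label_to_port[label] = egress_port
    return label

def core_forward(label):
    """Core switches match only the label, not the full 5-tuple."""
    return label_to_port.get(label)   # None => punt to a software switch / the controller

lbl = ingress(("10.0.0.1", "10.0.1.9", 6, 5555, 80), egress_port=3)
print(lbl, core_forward(lbl))   # -> 1 3
```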
{"title":"Software-Defined Label Switching: Scalable Per-Flow Control in SDN","authors":"Nanyang Huang, Qing Li, Dong Lin, Xiaowen Li, Gengbiao Shen, Yong Jiang","doi":"10.1109/IWQoS.2018.8624177","DOIUrl":"https://doi.org/10.1109/IWQoS.2018.8624177","url":null,"abstract":"Deploying Software-Defined Networks (SDNs) faces various challenges, and one of them is to implement per-flow control while preserving data plane scalability. Due to the limited rule storage space of commodity SDN switches, achieving flexible control and having a low-latency data plane with a low storage cost are often at odds. Unfortunately, existing SDN architectures fail to implement per-flow control efficiently: they either incur extra delays to packets or pose high storage burden to switches. In this paper, we propose Software-Defined Label Switching (SDLS) to achieve both data plane scalability and per-flow control. SDLS combines central control with label switching to reduce storage burden while maintaining per-flow control. SDLS introduces software switches into the data plane and manages the network in regions for scalability. SDLS is OpenFlow-compatible and employs a hybrid data plane to provide efficient flow setups. We evaluate SDLS by comparing with the state-of-the-art SDN architectures and show that SDLS can rival the best on the latency performance while reducing the number of flow entries and overflows by more than 47%.","PeriodicalId":222290,"journal":{"name":"2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124850500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}