A combinatorial double auction mechanism for cloud resource group-buying
Pub Date: 2014-12-01 | DOI: 10.1109/PCCC.2014.7017104
Zehao Sun, Zhenyu Zhu, Long Chen, Hongli Xu, Liusheng Huang
With the development of cloud computing, an increasing number of market-based mechanisms have been proposed for cloud resource allocation. Inspired by emerging group-buying Web sites, we advocate applying group-buying to cloud resource allocation: cloud providers benefit from demand aggregation thanks to group-buying's appeal to customers, while cloud users enjoy lower prices. However, none of the existing allocation mechanisms is designed specifically for the group-buying scenario, and taking full advantage of group-buying to maximize total utility is a challenge for mechanism design. In this paper, we fill this gap by proposing a novel auction mechanism based on a combinatorial double auction, in which the allocation algorithm and payment scheme are tailored to efficiently compute allocations and prices under group-buying. We theoretically prove that the economic properties required in auction design, such as individual rationality, budget balance, and truthfulness, are satisfied by our mechanism. Experiments show that the proposed mechanism yields higher total utility and scales well.
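As a rough illustration of the mechanism family (not the authors' combinatorial algorithm, and without the truthful payment scheme proved in the paper), the sketch below clears a double auction after buyer-side demand aggregation; the `Bid`, `Ask`, and `clear_market` names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Bid:                 # one buyer group: aggregated quantity, unit price
    quantity: int
    unit_price: float

@dataclass
class Ask:                 # one cloud provider: offered quantity, unit price
    quantity: int
    unit_price: float

def clear_market(bids, asks):
    """Greedy clearing: highest bids meet lowest asks while bid >= ask."""
    bids = sorted(bids, key=lambda b: -b.unit_price)
    asks = sorted(asks, key=lambda a: a.unit_price)
    trades, bi, ai = [], 0, 0
    while bi < len(bids) and ai < len(asks):
        bid, ask = bids[bi], asks[ai]
        if bid.unit_price < ask.unit_price:
            break                             # no remaining profitable trade
        qty = min(bid.quantity, ask.quantity)
        trades.append((bid.unit_price, ask.unit_price, qty))
        bid.quantity -= qty
        ask.quantity -= qty
        if bid.quantity == 0:
            bi += 1
        if ask.quantity == 0:
            ai += 1
    return trades

# Two aggregated buyer groups compete for capacity from two providers.
print(clear_market([Bid(10, 2.0), Bid(5, 1.2)], [Ask(8, 1.0), Ask(8, 1.5)]))
```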
{"title":"A combinatorial double auction mechanism for cloud resource group-buying","authors":"Zehao Sun, Zhenyu Zhu, Long Chen, Hongli Xu, Liusheng Huang","doi":"10.1109/PCCC.2014.7017104","DOIUrl":"https://doi.org/10.1109/PCCC.2014.7017104","url":null,"abstract":"With the development of cloud computing, there is an increasing number of market-based mechanisms for cloud resource allocation. Inspired by the emerging group-buying Web sites, we advocate that group-buying can be applied to cloud resource allocation, and thus cloud providers can benefit from demand aggregation due to the advantage of group-buying in attracting customers, while cloud users can enjoy lower price. However, none of the existing allocation mechanisms is specifically designed for the scenario with group-buying, and it is a challenge for mechanism design to take full advantage of group-buying to maximize the total utility. In this paper, we fill this gap by proposing an innovative auction mechanism. The mechanism is designed based on a combinatorial double auction, in which the allocation algorithm and payment scheme are specifically designed to efficiently generate allocation and compute prices considering group-buying. We theoretically prove that the necessary economic properties in auction design, such as individual rationality, budget balance and truthfulness, are satisfied in our work. The experiments show that the proposed mechanism yields higher total utility, and has good scalability.","PeriodicalId":105442,"journal":{"name":"2014 IEEE 33rd International Performance Computing and Communications Conference (IPCCC)","volume":"95 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134196690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new multi-objective microgrid restoration via semidefinite programming
Pub Date: 2014-12-01 | DOI: 10.1109/PCCC.2014.7017078
Liang Zhao, Wenzhan Song
This paper presents a new multi-objective formulation of the microgrid reconfiguration problem. Unlike existing distribution-system and microgrid reconfiguration algorithms, we account for the uncertainty arising from renewable energy generation and investigate the tradeoff between a newly introduced index measuring the reliability of the reconfiguration and the total load served. The resulting optimization problem is computationally prohibitive because of the binary circuit-breaker variables and the probabilistic constraint capturing the uncertainty of renewable generation. Nevertheless, a semidefinite programming (SDP) reformulation is developed using convex relaxation techniques and a scenario-based approximation. Furthermore, the weighted-sum method is applied to the reformulation to obtain Pareto-optimal points of the microgrid reconfiguration. Numerical tests validate the intrinsic tradeoff between the two objectives and demonstrate the effectiveness of the proposed solution methodology.
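In generic notation (placeholder symbols, not the paper's), the weighted-sum scalarization and the scenario-based approximation of the chance constraint take the form

$$\min_{x \in \mathcal{X}} \; w\,\tilde f_{1}(x) + (1-w)\,\tilde f_{2}(x), \qquad 0 \le w \le 1,$$

where $\tilde f_{1}$ and $\tilde f_{2}$ are the two objectives (the reliability-related index and the negated total load served) written as minimizations, and the chance constraint $\Pr_{\xi}\{g(x,\xi) \le 0\} \ge 1-\epsilon$ on the renewable output $\xi$ is replaced by $g(x,\xi_{s}) \le 0$ for sampled scenarios $s = 1,\dots,S$. Sweeping $w$ over $[0,1]$ and solving the relaxed SDP at each value traces out the reported Pareto points.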
{"title":"A new multi-objective microgrid restoration via semidefinite programming","authors":"Liang Zhao, Wenzhan Song","doi":"10.1109/PCCC.2014.7017078","DOIUrl":"https://doi.org/10.1109/PCCC.2014.7017078","url":null,"abstract":"This paper presents a new multi-objective microgrid reconfiguration problem formulation. Unlike existing distribution system or microgrid reconfiguration algorithms, we consider the effect of uncertainty arising from the renewable energy generation and investigate the tradeoff between the invented index measuring the reliability of reconfiguration and the total load served. The resulting optimization problem is computationally prohibitive due to the binary circuit breaker variables and the probability constraint accounting for the uncertainty of renewable generation. Nevertheless, a semidefinite programming (SDP) reformulation is developed based on convex relaxation techniques and the scenario-based approximation. Furthermore, weighted-sum method is applied in the reformulation and we eventually obtain the Pareto solution points of the microgrid reconfiguration. Numerical tests validate the intrinsic tradeoff between the two objectives and demonstrate the effectiveness of the proposed solution methodology.","PeriodicalId":105442,"journal":{"name":"2014 IEEE 33rd International Performance Computing and Communications Conference (IPCCC)","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132284267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
WCET analysis of static NUCA caches
Pub Date: 2014-12-01 | DOI: 10.1109/PCCC.2014.7017093
Yiqiang Ding, Wei Zhang
Large on-chip caches with uniform access time are inefficient in multicore processors because of the increasing wire delays across the chip. The Non-Uniform Cache Architecture (NUCA) has proven effective at mitigating these wire delays in multicore processors. For real-time systems built on multicore processors, it is crucial to bound the worst-case execution time (WCET) accurately and safely. In this paper, we develop a WCET analysis approach that accounts for the effects of static NUCA caches, and compare the WCET of real-time applications across different topologies of static NUCA caches. The experimental results demonstrate that a static NUCA cache can improve the worst-case performance of real-time applications on a multicore processor compared with a cache with uniform access time.
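To see why bank topology enters the bound, a generic (not the paper's exact) per-access worst-case latency for a static NUCA accessed by core $c$ can be written as

$$t_{\max}(a) \;=\; \begin{cases} t_{\text{bank}} + 2\,h\bigl(c, b(a)\bigr)\, t_{\text{hop}}, & \text{worst-case hit in bank } b(a),\\ t_{\text{miss}}, & \text{worst-case miss,} \end{cases}$$

where $h(c,b)$ is the on-chip hop count between the core and the bank holding the line. The analysis accumulates these per-access worst cases along the worst-case execution path, so topologies that keep frequently accessed banks close to the requesting core tighten the WCET bound.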
{"title":"WCET analysis of static NUCA caches","authors":"Yiqiang Ding, Wei Zhang","doi":"10.1109/PCCC.2014.7017093","DOIUrl":"https://doi.org/10.1109/PCCC.2014.7017093","url":null,"abstract":"Large on-chip caches with uniform access time are inefficient to be used in multicore processors due to the increasing wire delays across the chip. The Non-Uniform Cache Architecture (NUCA) is proved to be effective to solve the problem of the increasing wire delays in multicore processors. For real-time systems that use multicore processors, it is crucial to bound the worst-case execution time (WCET) accurately and safely. In this paper, we develop a WCET analysis approach to consider the effects of static NUCA caches on WCET, and compare the WCET of the real-time applications in different topologies of the static NUCA caches. The experimental results demonstrate that the static NUCA cache can improve the worst-case performance of the real-time applications in the multicore processor as compared to the cache with uniform access time.","PeriodicalId":105442,"journal":{"name":"2014 IEEE 33rd International Performance Computing and Communications Conference (IPCCC)","volume":"1995 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128227445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Joint power optimization through VM placement and flow scheduling in data centers
Pub Date: 2014-12-01 | DOI: 10.1109/PCCC.2014.7017088
Dawei Li, Jie Wu, Zhiyong Liu, Fa Zhang
Two components that together consume the majority of IT power in data centers are the servers and the Data Center Network (DCN). Existing works fail to fully exploit power management techniques on the servers and in the DCN at the same time. In this paper, we jointly consider VM placement on servers with scalable frequencies and flow scheduling in the DCN to minimize the overall system's power consumption. Owing to the convex relation between a server's power consumption and its operating frequency, we prove that, for a given number of servers, computation workloads should be allocated to servers in a balanced way to minimize server power consumption. To reduce the power consumption of the DCN, we further consider the flow requirements among the VMs during VM allocation and assignment. After VM placement, flow consolidation is conducted to reduce the number of active switches and ports. We observe that choosing the minimum number of servers to accommodate the VMs may result in high server power consumption, because the servers must run at higher frequencies, while choosing the number of servers purely to minimize server power may increase the power consumption of the DCN. We therefore choose the number of servers to be used based on the overall system's power consumption. Simulations show that our joint power optimization method reduces the overall power consumption significantly and outperforms various state-of-the-art methods.
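The balanced-allocation result is a consequence of convexity. Under a commonly assumed power model (not necessarily the paper's exact one), $P(f) = P_{\text{static}} + c f^{\alpha}$ with $\alpha > 1$, a server that must finish workload $w_i$ within period $T$ runs at frequency $f_i = w_i / T$, and Jensen's inequality gives

$$\sum_{i=1}^{m} P\!\left(\frac{w_i}{T}\right) \;\ge\; m\, P\!\left(\frac{W}{mT}\right), \qquad W = \sum_{i=1}^{m} w_i,$$

with equality when $w_i = W/m$ for all $i$: for a fixed number $m$ of active servers, the balanced assignment minimizes total server power.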
{"title":"Joint power optimization through VM placement and flow scheduling in data centers","authors":"Dawei Li, Jie Wu, Zhiyong Liu, Fa Zhang","doi":"10.1109/PCCC.2014.7017088","DOIUrl":"https://doi.org/10.1109/PCCC.2014.7017088","url":null,"abstract":"Two important components that consume the majority of IT power in data centers are the servers and the Data Center Network (DCN). Existing works fail to fully utilize power management techniques on the servers and in the DCN at the same time. In this paper, we jointly consider VM placement on servers with scalable frequencies and flow scheduling in the DCN, to minimize the overall system's power consumption. Due to the convex relation between a server's power consumption and its operating frequency, we prove that, given the number of servers to be used, computation workloads should be allocated to severs in a balanced way, to minimize the power consumption on servers. To reduce the power consumption of the DCN, we further consider the flow requirements among the VMs during VM allocation and assignment. Also, after VM placement, flow consolidation is conducted to reduce the number of active switches and ports. We notice that, choosing the minimum number of servers to accommodate the VMs may result in high power consumption on servers, due to servers' increased operating frequencies. Choosing the optimal number of servers purely based on servers' power consumption leads to reduced power consumption on servers, but may increase power consumption of the DCN. We propose to choose the optimal number of servers to be used, based on the overall system's power consumption. Simulations show that, our joint power optimization method helps to reduce the overall power consumption significantly, and outperforms various existing state-of-the-art methods in terms of reducing the overall system's power consumption.","PeriodicalId":105442,"journal":{"name":"2014 IEEE 33rd International Performance Computing and Communications Conference (IPCCC)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121711171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The automatic configuration of transmit power in LTE networks based on throughput estimation
Pub Date: 2014-12-01 | DOI: 10.1109/PCCC.2014.7017020
Mariusz Słabicki, K. Grochla
We present an optimization method for automatically selecting the downlink transmit power of LTE eNodeBs based on the estimated throughput of the network. The procedure provides self-optimizing network functions that minimize inter-cell interference and maximize radio resource utilization. We propose a method based on the expected link throughput under a uniform spatial distribution of clients, and compare it with an SINR-based solution. Simulation results show that the proposed method achieves a higher average link rate per client and higher total network throughput than optimization methods reported in the literature.
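A toy sketch of the throughput-estimation idea (single cell, Shannon capacity, an assumed path-loss exponent; the inter-cell interference that makes the power selection non-trivial in the paper is omitted, and all names and constants here are assumptions):

```python
import math
import random

def estimate_mean_throughput(tx_power_dbm, cell_radius_m, bandwidth_hz=10e6,
                             noise_dbm=-104.0, path_loss_exp=3.5,
                             n_samples=10_000):
    """Monte-Carlo estimate of the mean downlink Shannon rate for clients
    drawn uniformly over a disc-shaped cell (toy model, no interference)."""
    total = 0.0
    for _ in range(n_samples):
        r = cell_radius_m * math.sqrt(random.random())   # uniform over a disc
        path_loss_db = 10.0 * path_loss_exp * math.log10(max(r, 1.0))
        snr_db = tx_power_dbm - path_loss_db - noise_dbm
        total += bandwidth_hz * math.log2(1.0 + 10 ** (snr_db / 10.0))
    return total / n_samples

# With neighbouring eNodeBs included, their power would enter the SINR, so the
# power that maximizes this estimate is no longer trivially the largest one.
print(estimate_mean_throughput(tx_power_dbm=43.0, cell_radius_m=500.0))
```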
{"title":"The automatic configuration of transmit power in LTE networks based on throughput estimation","authors":"Mariusz Słabicki, K. Grochla","doi":"10.1109/PCCC.2014.7017020","DOIUrl":"https://doi.org/10.1109/PCCC.2014.7017020","url":null,"abstract":"We present an optimization method for automatic selection of downlink transmit power of LTE eNodeB based on the estimated throughput of the network. The procedures provide self optimized network functions to minimize the inter-cell interferences and maximizing the radio resource utilization. We propose a method based on the expected link throughput based on uniform client spatial distribution and compare our approach with solution based on SINR. We show simulation results that prove that the proposed method gives higher average link rate per client and higher total network throughput than optimization methods shown in the literature.","PeriodicalId":105442,"journal":{"name":"2014 IEEE 33rd International Performance Computing and Communications Conference (IPCCC)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121551778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Achieving transparent coexistence in a multi-hop secondary network through distributed computation
Pub Date: 2014-12-01 | DOI: 10.1109/PCCC.2014.7017055
Xu Yuan, Yi Shi, Yiwei Thomas Hou, W. Lou, S. Midkiff, S. Kompella
Transparent coexistence, also known as underlay, offers much more efficient spectrum sharing than the traditional interweave coexistence paradigm. In previous work, transparent coexistence for multi-hop secondary networks was studied. In this paper, we design a distributed solution to achieve this paradigm. In our design, we show how to increase the number of data streams iteratively while meeting the constraints of the MIMO interference cancellation (IC) model and maintaining transparent coexistence. All steps of our distributed algorithm can be accomplished through local information exchange among neighboring nodes. Simulation results show that the performance of our distributed algorithm is highly competitive compared with an upper-bound solution to the centralized problem.
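A heavily simplified sketch of the iterative stream-increase idea (hypothetical names, a symmetric antenna/DoF budget, and no primary-network or node-ordering constraints, all of which the paper's IC model handles):

```python
def greedy_streams(links, antennas, interferers):
    """links: (tx, rx) pairs; antennas: node -> antenna count; interferers:
    node -> set of neighbors whose streams it must cancel. Each node's own
    streams plus its cancellation burden must fit within its antennas."""
    streams = {link: 0 for link in links}
    load = {node: 0 for node in antennas}          # streams sent or received

    def fits(node):
        burden = sum(load[m] for m in interferers[node])
        return load[node] + 1 + burden <= antennas[node]

    progress = True
    while progress:                                # add one stream at a time
        progress = False
        for tx, rx in links:
            if fits(tx) and fits(rx):
                streams[(tx, rx)] += 1
                load[tx] += 1
                load[rx] += 1
                progress = True
    return streams

# Three secondary nodes, two links, four antennas each.
print(greedy_streams([("s1", "s2"), ("s2", "s3")],
                     {"s1": 4, "s2": 4, "s3": 4},
                     {"s1": {"s3"}, "s2": set(), "s3": {"s1"}}))
```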
{"title":"Achieving transparent coexistence in a multi-hop secondary network through distributed computation","authors":"Xu Yuan, Yi Shi, Yiwei Thomas Hou, W. Lou, S. Midkiff, S. Kompella","doi":"10.1109/PCCC.2014.7017055","DOIUrl":"https://doi.org/10.1109/PCCC.2014.7017055","url":null,"abstract":"Transparent coexistence, also known as underlay, offers much more efficient spectrum sharing than traditional interweave coexistence paradigm. In a previous work, the transparent coexistence for a multi-hop secondary networks is studied. In this paper, we design a distributed solution to achieve this paradigm. In our design, we show how to increase the number of data streams iteratively while meeting constraints in the MIMO interference cancelation (IC) model and achieving transparent coexistence. All steps in our distributed algorithm can be accomplished based on local information exchange among the neighboring nodes. Our simulation results show that the performance of our distributed algorithm is highly competitive when compared to an upper bound solution for the centralized problem.","PeriodicalId":105442,"journal":{"name":"2014 IEEE 33rd International Performance Computing and Communications Conference (IPCCC)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126258119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low-latency last-level cache structure based on grouped cores in Chip Multi-Processor
Pub Date: 2014-12-01 | DOI: 10.1109/PCCC.2014.7017021
Jinbo Xu, Weixia Xu, Kefei Wang, Zhengbin Pang
The Last-Level Cache (LLC) plays an important role in Chip Multi-Processors (CMPs). The objective of this work is to optimize the structure and management strategy of the LLC. For an 8-core CMP, an LLC structure based on grouped cores is proposed, in which the 8 cores are divided into 4 groups. All LLC resources are classified into three types: fixed private cache, dynamic private cache, and dynamic shared cache. The layout of the LLC structure and the corresponding dynamic partitioning strategy are designed to achieve low access latency and high efficiency. Experimental results on a full-system simulator show that the proposed structure and method reduce access latency by 2% to 12% compared with previous designs such as the tiled, cache-centered, and core-centered structures. Consequently, performance measured in IPC improves by up to 7%. The contribution applies not only to the 8-core CMP but also to other small-scale CMPs.
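A purely illustrative data model of the layout just described (group membership and the three cache classes; the sizes and the actual partitioning policy are assumptions, not taken from the paper):

```python
# 8 cores form 4 groups of 2; each group's LLC slice is split into a fixed
# private portion, a dynamic private portion (re-allocatable within the
# group), and a dynamic shared portion (usable across groups on demand).
GROUPS = {g: (2 * g, 2 * g + 1) for g in range(4)}   # group id -> core ids

llc_partition_kb = {
    g: {"fixed_private": 256, "dynamic_private": 256, "dynamic_shared": 512}
    for g in GROUPS
}
```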
{"title":"Low-latency last-level cache structure based on grouped cores in Chip Multi-Processor","authors":"Jinbo Xu, Weixia Xu, Kefei Wang, Zhengbin Pang","doi":"10.1109/PCCC.2014.7017021","DOIUrl":"https://doi.org/10.1109/PCCC.2014.7017021","url":null,"abstract":"Last-Level Cache (LLC) plays an important role in Chip Multi-Processor (CMP). The objective of this work is to optimize the structure and management strategy of LLC. Based on 8-core CMP, a LLC structure based on grouped cores is proposed, where 8 cores are divided into 4 groups. All LLC resources are classified into three types, which are fixed private cache, dynamic private cache and dynamic shared cache. The layout of the LLC structure and the corresponding dynamic partitioning strategy are designed to achieve low access latency and high efficiency. Experimental results on full-system simulator suggest that the proposed structure and method are able to reduce the access latency by 2% to 12% compared with previous works, such as tiled structure, cache-centered structure and core-centered structure. Consequently, performance measured by IPC is improved up to 7%. The contribution of this paper is useful for CMP performance, and applies to not only 8-core CMP but also all small-scale CMPs.","PeriodicalId":105442,"journal":{"name":"2014 IEEE 33rd International Performance Computing and Communications Conference (IPCCC)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114185738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Randomized routing in multi-party internet video conferencing
Pub Date: 2014-12-01 | DOI: 10.1109/PCCC.2014.7017110
Yousuk Seung, Quan Leng, Wei Dong, L. Qiu, Yin Zhang
Despite significant advances, supporting high-quality, large video conferences at low cost remains a significant challenge due to stringent performance requirements, limited and heterogeneous client resources, and dynamic traffic demands. In this paper, we develop a simple yet effective valiant multicast routing scheme that selects application-layer routes and adapts streaming rates according to current network conditions. It consists of four novel components: (i) a valiant multicast routing that uses two random choices to effectively balance load in the presence of uncertainty about clients' load, (ii) a scheme that clusters clients by delay and adapts valiant multicast routing based on both upload capacity and locality, (iii) an approach that further leverages resources from other peers or content distribution network (CDN) nodes to enhance performance, and (iv) a simple distributed scheme that adapts streaming rates to the currently available network resources. Our implementation and experiments show that our approach significantly outperforms existing multicast routing schemes and quickly adapts to changing traffic demands and network conditions.
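The "two random choices" component can be sketched in a few lines (hypothetical names; the full scheme also weighs upload capacity, locality, and delay-based clustering):

```python
import random

def pick_relay(candidates, load, rng=random):
    """Sample two candidate relays at random and return the less-loaded one."""
    a, b = rng.sample(candidates, 2)
    return a if load[a] <= load[b] else b

# Route each receiver's stream through one relay chosen by two random choices.
relays = ["r1", "r2", "r3", "r4"]
load = {r: 0 for r in relays}
for receiver in ["alice", "bob", "carol", "dave", "erin"]:
    chosen = pick_relay(relays, load)
    load[chosen] += 1               # the chosen relay carries one more stream
print(load)
```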
{"title":"Randomized routing in multi-party internet video conferencing","authors":"Yousuk Seung, Quan Leng, Wei Dong, L. Qiu, Yin Zhang","doi":"10.1109/PCCC.2014.7017110","DOIUrl":"https://doi.org/10.1109/PCCC.2014.7017110","url":null,"abstract":"Despite significant advances, supporting high-quality large video conferences at a low cost remains a significant challenge due to stringent performance requirements, limited and heterogeneous client resources, and dynamic traffic demands. In this paper, we develop a simple yet effective valiant multicast routing to select application-layer routes and adapt streaming rates according to the current network condition. It consists of four novel components: (i) a valiant multicast routing using two random choices to effectively balance the load in the presence of uncertainty about the clients' load, (ii) a scheme to cluster clients based on their delay and adapt valiant multicast routing based on both upload capacity and locality, (iii) an approach to further leverage resources from other peers or nodes in content distribution network (CDN) to enhance performance, and (iv) a simple distributed scheme to adapt streaming rates according to the current network resources. Our real implementation and experiments show that our approach significantly out-performs existing multicast routing schemes and quickly adapts to changing traffic demands and network conditions.","PeriodicalId":105442,"journal":{"name":"2014 IEEE 33rd International Performance Computing and Communications Conference (IPCCC)","volume":"354 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115927938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shaping HTTP adaptive streams using receive window tuning method in home gateway
Pub Date: 2014-12-01 | DOI: 10.1109/PCCC.2014.7017042
C. Ameur, Emmanuel Mory, Bernard A. Cousin
In this paper, we describe a new method, called RWTM (Receive Window Tuning Method), that shapes HTTP adaptive streams. It employs flow control in the home gateway to improve users' quality of experience (QoE). Our use case involves two HTTP adaptive streaming clients competing for bandwidth in the same home network. Results show that the proposed method considerably improves QoE: it improves video stability, fidelity to the optimal video quality level, and the speed of convergence to that level.
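A minimal sketch of the receive-window shaping idea, under the standard approximation that a TCP flow's throughput is capped at roughly (advertised window)/RTT; the function name and where the computation runs are illustrative, not taken from the paper:

```python
def receive_window_bytes(target_rate_bps, rtt_s):
    """Advertised receive window (bytes) that caps throughput near target_rate."""
    return int(target_rate_bps * rtt_s / 8)

# Cap one HAS client at 4 Mb/s on a 40 ms path: roughly a 20 kB window, which
# the gateway would advertise to the server on the client's behalf.
print(receive_window_bytes(4_000_000, 0.040))
```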
{"title":"Shaping HTTP adaptive streams using receive window tuning method in home gateway","authors":"C. Ameur, Emmanuel Mory, Bernard A. Cousin","doi":"10.1109/PCCC.2014.7017042","DOIUrl":"https://doi.org/10.1109/PCCC.2014.7017042","url":null,"abstract":"In this paper, we describe a new method, called RWTM (Receive Window Tuning Method) that shapes HTTP adaptive streams. It employs the flow control in the gateway to improve the quality of experience (QoE) of users. Our use case is when two HTTP Adaptive streaming clients are competing for bandwidth in the same home network. Results show that our proposed method considerably improves the QoE; it improves the video stability, the fidelity to optimal video quality level selection and the convergence speed to the optimal video quality level.","PeriodicalId":105442,"journal":{"name":"2014 IEEE 33rd International Performance Computing and Communications Conference (IPCCC)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114962654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An encryption and probability based access control model for named data networking
Pub Date: 2014-12-01 | DOI: 10.1109/PCCC.2014.7017100
Tao Chen, Kai Lei, Kuai Xu
Named data networking (NDN) shifts the Internet from today's IP-based packet-delivery model to a name-based data-retrieval model. The architectural shift from IP addresses to named data enables effective content delivery through in-network caching and direct object retrieval. However, this shift also creates challenges for securing data objects and providing appropriate access control on named data, owing to widespread data replication and the loss of network perimeters. This paper designs, implements, and evaluates an encryption- and probability-based access control model for NDN, with a video streaming service as a case study. In particular, we explore a combination of public-key cryptography and symmetric ciphers to encrypt video data and prevent unauthorized access. In addition, we build a Bloom-filter probabilistic data structure to pre-filter Interests from consumers lacking the required credentials. Our experimental results demonstrate that the proposed model provides access control while incurring low system and performance overhead on producers and consumers.
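A toy sketch of the Bloom-filter pre-filtering step (the filter parameters, how credentials are encoded into Interest names, and the producer-side registration are assumptions; the hybrid public-key/symmetric encryption of the video payload is not shown):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k SHA-256-derived positions over a fixed bit array."""
    def __init__(self, size_bits=8192, n_hashes=4):
        self.size, self.k = size_bits, n_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

# The producer registers authorized credentials; Interests that fail the check
# are dropped before any content or decryption key is returned.
authorized = BloomFilter()
authorized.add("consumer-42:/videos/lecture1")
print(authorized.might_contain("consumer-42:/videos/lecture1"))  # True
print(authorized.might_contain("mallory:/videos/lecture1"))      # False (w.h.p.)
```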
{"title":"An encryption and probability based access control model for named data networking","authors":"Tao Chen, Kai Lei, Kuai Xu","doi":"10.1109/PCCC.2014.7017100","DOIUrl":"https://doi.org/10.1109/PCCC.2014.7017100","url":null,"abstract":"The new named data networking (NDN) has shifted the Internet from today's IP-based packet-delivery model to the name-based data retrieval model. The architecture shift from IP addresses to named data results in effective content delivery via in-networking cache and direct object retrieval. However, this shift has also created challenges and obstacles for securing data objects and providing appropriate access control on named data due to broad data replications and the loss of network perimeters. This paper designs, implements, and evaluates an encryption and probability based access control model for NDN with video streaming service as a case study. In particularly, we explore a combination of public-key cryptography and symmetric ciphers to encrypt video data for preventing unauthorized access. In addition, we build a bloom-filter probabilistic data structure for pre-filtering Interests from consumers without desired credentials. Our experimental results have demonstrated the capabilities of the proposed model for providing access control while incurring low system and performance overhead on producers and consumers.","PeriodicalId":105442,"journal":{"name":"2014 IEEE 33rd International Performance Computing and Communications Conference (IPCCC)","volume":"187 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123518763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}