SPIRO: Turning elephants into mice with efficient RF transport
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218452
Eugene Chai, K. Shin, Sung-ju Lee, Jeongkeun Lee, R. Etkin
Cloud-RANs (Radio Access Networks) assume the existence of a high-capacity, low-latency fronthaul to support cooperative transmission schemes such as CoMP (Coordinated Multi-Point) and coordinated beamforming. However, building such hierarchical wired fronthauls is challenging because the typical I/Q data stream is non-elastic: I/Q data over the wired fronthaul has little tolerance for delay jitter and zero tolerance for losses. Any distortion to the I/Q data stream renders the resulting wireless transmission completely unintelligible. We propose Spiro, a mechanism that efficiently transports RF signals over a wired fronthaul network. The primary goal of Spiro is to make I/Q data streams elastic and resilient to unexpected changes in network conditions. This is accomplished through a novel combination of compression and prioritization of I/Q data on the wired fronthaul. For a given wireless throughput, Spiro can reduce the bandwidth demand of the fronthaul data stream by up to 50% without any noticeable degradation in wireless reception quality. Further bandwidth reduction via compression, as well as frame losses, has only a limited impact on the wireless throughput.
{"title":"SPIRO: Turning elephants into mice with efficient RF transport","authors":"Eugene Chai, K. Shin, Sung-ju Lee, Jeongkeun Lee, R. Etkin","doi":"10.1109/INFOCOM.2015.7218452","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218452","url":null,"abstract":"Cloud-RANs (Radio Access Networks) assume the existence of a high-capacity, low-delay/latency fronthaul to support cooperative transmission schemes such as CoMP (Coordinated Multi-Point) and coordinated beamforming. However, building such hierarchical wired fronthauls is challenging as the typical I/Q data stream is non-elastic - I/Q data over the wired fronthaul has little tolerance for delay jitters and zero tolerance for losses. Any distortion to the I/Q data stream will make the resulting wireless transmission completely unintelligible. We propose Spiro, a mechanism that efficiently transports RF signals over a wired fronthaul network. The primary goal of Spiro is to make I/Q data streams elastic and resilient to unexpected network condition changes. This is accomplished through a novel combination of compression and data prioritization of I/Q data on the wired fronthaul. For a given wireless throughput, Spiro can reduce the bandwidth demand of the fronthaul data stream by up to 50% without any noticeable degradation in the wireless reception quality. Further bandwidth reduction via compression and frame losses only have a limited impact on the wireless throughput.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123454196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An empirical mixture model for large-scale RTT measurements
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218636
Romain Fontugne, J. Mazel, K. Fukuda
Monitoring delays in the Internet is essential to understand network conditions and to ensure the good functioning of time-sensitive applications. Large-scale measurements of round-trip time (RTT) are promising data sources for gaining better insights into Internet-wide delays. However, the lack of an efficient methodology for modeling RTTs prevents researchers from leveraging the value of these datasets. In this work, we propose a log-normal mixture model to identify, characterize, and monitor spatial and temporal dynamics of RTTs. This data-driven approach provides a coarse-grained view of numerous RTTs in the form of a graph and thus enables efficient and systematic analysis of Internet-wide measurements. Using this model, we analyze more than 13 years of RTTs from about 12 million unique IP addresses in passively measured backbone traffic traces. We evaluate the proposed method by comparison with external datasets, and present examples where the model highlights interesting delay fluctuations due to route changes or congestion. We also introduce an application of the model that identifies hosts deviating from their typical RTT fluctuations, and we envision various further applications for this empirical model.
{"title":"An empirical mixture model for large-scale RTT measurements","authors":"Romain Fontugne, J. Mazel, K. Fukuda","doi":"10.1109/INFOCOM.2015.7218636","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218636","url":null,"abstract":"Monitoring delays in the Internet is essential to understand the network condition and ensure the good functioning of time-sensitive applications. Large-scale measurements of round-trip time (RTT) are promising data sources to gain better insights into Internet-wide delays. However, the lack of efficient methodology to model RTTs prevents researchers from leveraging the value of these datasets. In this work, we propose a log-normal mixture model to identify, characterize, and monitor spatial and temporal dynamics of RTTs. This data-driven approach provides a coarse grained view of numerous RTTs in the form of a graph, thus, it enables efficient and systematic analysis of Internet-wide measurements. Using this model, we analyze more than 13 years of RTTs from about 12 millions unique IP addresses in passively measured backbone traffic traces. We evaluate the proposed method by comparison with external data sets, and present examples where the proposed model highlights interesting delay fluctuations due to route changes or congestion. We also introduce an application based on the proposed model to identify hosts deviating from their typical RTTs fluctuations, and we envision various applications for this empirical model.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"261 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122466461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Code offload with least context migration in the mobile cloud
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218570
Yong Li, Wei Gao
Mobile Cloud Computing (MCC) is of particular importance in addressing the contradiction between the increasing complexity of user applications and the limited battery lifespan of mobile devices, by offloading computational workloads from local devices to the remote cloud. Current offloading schemes either require programmer annotations, which restricts their wide application, or transmit too much unnecessary data, resulting in bandwidth and energy waste. In this paper, we propose a novel method-level offloading methodology that offloads local computational workload with as little data transmission as possible. Our basic idea is to identify the contexts that are necessary for a method's execution by parsing application binaries in advance, and to apply this parsing result to selectively migrate heap data while still allowing successful remote execution of the method. Our implementation of this design is built upon the Dalvik Virtual Machine. Our experiments and evaluation with applications downloaded from Google Play show that our approach saves data transmission significantly compared to existing schemes.
{"title":"Code offload with least context migration in the mobile cloud","authors":"Yong Li, Wei Gao","doi":"10.1109/INFOCOM.2015.7218570","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218570","url":null,"abstract":"Mobile Cloud Computing (MCC) is of particular importance to address the contradiction between the increasing complexity of user applications and the limited lifespan of mobile device's battery, by offloading the computational workloads from local devices to the remote cloud. Current offloading schemes either require the programmer's annotations, which restricts its wide application; or transmits too much unnecessary data, resulting bandwidth and energy waste. In this paper, we propose a novel method-level offloading methodology to offload local computational workload with as least data transmission as possible. Our basic idea is to identify the contexts which are necessary to the method execution by parsing application binaries in advance and applying this parsing result to selectively migrate heap data while allowing successful method execution remotely. Our implementation of this design is built upon Dalvik Virtual Machine. Our experiments and evaluation against applications downloaded from Google Play show that our approach can save data transmission significantly comparing to existing schemes.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125241966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RFID cardinality estimation with blocker tags
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218548
Xiulong Liu, Bin Xiao, Keqiu Li, Jie Wu, A. Liu, Heng Qi, Xin Xie
Widely used RFID tags raise serious privacy concerns because a tag responds to queries from readers whether or not they are authorized. The common solution is to use a commercially available blocker tag, which behaves as if a set of tags with known blocking IDs were present. The use of blocker tags makes RFID estimation much more challenging, as some genuine tag IDs are covered by the blocker tag and some are not. In this paper, we propose REB, the first RFID estimation scheme that works in the presence of blocker tags. REB uses the framed slotted Aloha protocol specified in the C1G2 standard. For each round of the Aloha protocol, REB first executes the protocol on the genuine tags and the blocker tag, and then virtually executes the protocol on the known blocking IDs using the same Aloha parameters. The basic idea of REB is to conduct statistical inference from the two sets of responses and estimate the number of genuine tags. We conduct extensive simulations to evaluate the performance of REB in terms of time efficiency and estimation reliability. The experimental results reveal that REB runs tens of times faster than the fastest identification protocol with the same accuracy requirement.
{"title":"RFID cardinality estimation with blocker tags","authors":"Xiulong Liu, Bin Xiao, Keqiu Li, Jie Wu, A. Liu, Heng Qi, Xin Xie","doi":"10.1109/INFOCOM.2015.7218548","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218548","url":null,"abstract":"The widely used RFID tags impose serious privacy concerns as a tag responds to queries from readers no matter they are authorized or not. The common solution is to use a commercially available blocker tag which behaves as if a set of tags with known blocking IDs are present. The use of blocker tags makes RFID estimation much more challenging as some genuine tag IDs are covered by the blocker tag and some are not. In this paper, we propose REB, the first RFID estimation scheme with the presence of blocker tags. REB uses the framed slotted Aloha protocol specified in the C1G2 standard. For each round of the Aloha protocol, REB first executes the protocol on the genuine tags and the blocker tag, and then virtually executes the protocol on the known blocking IDs using the same Aloha protocol parameters. The basic idea of REB is to conduct statistically inference from the two sets of responses and estimate the number of genuine tags. We conduct extensive simulations to evaluate the performance of REB, in terms of time-efficiency and estimation reliability. The experimental results reveal that our REB scheme runs tens of times faster than the fastest identification protocol with the same accuracy requirement.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123354168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TimeFlip: Scheduling network updates with timestamp-based TCAM ranges
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218645
Tal Mizrahi, Ori Rottenstreich, Y. Moses
Network configuration and policy updates occur frequently and must be performed in a way that minimizes transient effects caused by intermediate states of the network. It has been shown that accurate time can be used to coordinate network-wide updates, thereby reducing temporary inconsistencies. However, this approach presents a great challenge: even if network devices have perfectly synchronized clocks, how can we guarantee that updates are performed at the exact time for which they were scheduled? In this paper we present a practical method for implementing accurate time-based updates using TimeFlips. A TimeFlip is a time-based update implemented with a timestamp field in a Ternary Content Addressable Memory (TCAM) entry. TimeFlips can be used to implement Atomic Bundle updates and to coordinate network updates with high accuracy. We analyze the amount of TCAM resources required to encode a TimeFlip, and show that if there is enough flexibility in determining the scheduled time, a TimeFlip can be encoded by a single TCAM entry, using a single bit to represent the timestamp, allowing the update to be performed with an accuracy on the order of 1 microsecond.
{"title":"TimeFlip: Scheduling network updates with timestamp-based TCAM ranges","authors":"Tal Mizrahi, Ori Rottenstreich, Y. Moses","doi":"10.1109/INFOCOM.2015.7218645","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218645","url":null,"abstract":"Network configuration and policy updates occur frequently, and must be performed in a way that minimizes transient effects caused by intermediate states of the network. It has been shown that accurate time can be used for coordinating network-wide updates, thereby reducing temporary inconsistencies. However, this approach presents a great challenge; even if network devices have perfectly synchronized clocks, how can we guarantee that updates are performed at the exact time for which they were scheduled? In this paper we present a practical method for implementing accurate time-based updates, using TIMEFLIPs. A TimeFlip is a time-based update that is implemented using a timestamp field in a Ternary Content Addressable Memory (TCAM) entry. TIMEFLIPs can be used to implement Atomic Bundle updates, and to coordinate network updates with high accuracy. We analyze the amount of TCAM resources required to encode a TimeFlip, and show that if there is enough flexibility in determining the scheduled time, a TimeFlip can be encoded by a single TCAM entry, using a single bit to represent the timestamp, and allowing the update to be performed with an accuracy on the order of 1 microsecond.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131453848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
User recruitment for mobile crowdsensing over opportunistic networks
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218612
M. Karaliopoulos, Orestis Telelis, I. Koutsopoulos
We look into the realization of mobile crowdsensing campaigns that draw on the opportunistic networking paradigm, as practised in delay-tolerant networks but also in the emerging device-to-device communication mode of cellular networks. In particular, we ask how mobile users can be optimally selected to generate the required space-time paths across the network for collecting data from a set of fixed locations. The users hold different roles in these paths, from collecting data with their sensing-enabled devices to relaying them across the network and uploading them to data collection points with Internet connectivity. We first consider scenarios with deterministic node mobility and formulate the selection of users as a minimum-cost set cover problem with a submodular objective function. We then generalize to more realistic settings with uncertainty about user mobility. A methodology is devised for translating the statistics of individual user mobility into statistics of space-time path formation and feeding them into the set cover formulation. We describe practical greedy heuristics for the resulting NP-hard problems and compute their approximation ratios. Our experimentation with real mobility datasets (a) illustrates the multiple tradeoffs between the campaign cost and duration, the bound on the hop count of space-time paths, and the number of collection points; and (b) provides evidence that, in realistic problem instances, the heuristics perform much better than their pessimistic worst-case bounds suggest.
{"title":"User recruitment for mobile crowdsensing over opportunistic networks","authors":"M. Karaliopoulos, Orestis Telelis, I. Koutsopoulos","doi":"10.1109/INFOCOM.2015.7218612","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218612","url":null,"abstract":"We look into the realization of mobile crowdsensing campaigns that draw on the opportunistic networking paradigm, as practised in delay-tolerant networks but also in the emerging device-to-device communication mode in cellular networks. In particular, we ask how mobile users can be optimally selected in order to generate the required space-time paths across the network for collecting data from a set of fixed locations. The users hold different roles in these paths, from collecting data with their sensing-enabled devices to relaying them across the network and uploading them to data collection points with Internet connectivity. We first consider scenarios with deterministic node mobility and formulate the selection of users as a minimum-cost set cover problem with a submodular objective function. We then generalize to more realistic settings with uncertainty about the user mobility. A methodology is devised for translating the statistics of individual user mobility to statistics of spacetime path formation and feeding them to the set cover problem formulation. We describe practical greedy heuristics for the resulting NP-hard problems and compute their approximation ratios. Our experimentation with real mobility datasets (a) illustrates the multiple tradeoffs between the campaign cost and duration, the bound on the hopcount of space-time paths, and the number of collection points; and (b) provides evidence that in realistic problem instances the heuristics perform much better than what their pessimistic worst-case bounds suggest.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"368 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124616803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing privacy through caching in location-based services
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218474
Ben Niu, Qinghua Li, Xiao-yan Zhu, G. Cao, Hui Li
Privacy protection is critical for Location-Based Services (LBSs). In most previous solutions, users query service data from the untrusted LBS server when needed and discard the data immediately after use. However, the data can be cached and reused to answer future queries. This prevents some queries from being sent to the LBS server and thus improves privacy. Although a few previous works recognize the usefulness of caching for better privacy, they use caching in a fairly straightforward way and do not show the quantitative relation between caching and privacy. In this paper, we propose a caching-based solution to protect location privacy in LBSs, and rigorously explore how much caching can improve privacy. Specifically, we propose an entropy-based privacy metric that, for the first time, incorporates the effect of caching on privacy. We then design two novel caching-aware dummy selection algorithms that enhance location privacy by maximizing both the privacy of the current query and the dummies' contribution to the cache. Evaluations show that our algorithms provide much better privacy than previous caching-oblivious and caching-aware solutions.
{"title":"Enhancing privacy through caching in location-based services","authors":"Ben Niu, Qinghua Li, Xiao-yan Zhu, G. Cao, Hui Li","doi":"10.1109/INFOCOM.2015.7218474","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218474","url":null,"abstract":"Privacy protection is critical for Location-Based Services (LBSs). In most previous solutions, users query service data from the untrusted LBS server when needed, and discard the data immediately after use. However, the data can be cached and reused to answer future queries. This prevents some queries from being sent to the LBS server and thus improves privacy. Although a few previous works recognize the usefulness of caching for better privacy, they use caching in a pretty straightforward way, and do not show the quantitative relation between caching and privacy. In this paper, we propose a caching-based solution to protect location privacy in LBSs, and rigorously explore how much caching can be used to improve privacy. Specifically, we propose an entropy-based privacy metric which for the first time incorporates the effect of caching on privacy. Then we design two novel caching-aware dummy selection algorithms which enhance location privacy through maximizing both the privacy of the current query and the dummies' contribution to cache. Evaluations show that our algorithms provide much better privacy than previous caching-oblivious and caching-aware solutions.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"361 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120964611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time failure prediction in online services
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218516
M. Shatnawi, M. Hefeeda
Current data mining techniques used to create failure predictors for online services require massive amounts of data to build, train, and test the predictors. These operations are tedious, time-consuming, and not done in real time. Moreover, the accuracy of the resulting predictor is highly compromised by changes that affect the environment and working conditions of the predictor. We propose a new approach that creates a dynamic failure predictor for online services in real time and keeps its accuracy high through the service's run-time changes. We use synthetic transactions during the run-time lifecycle to generate current data about the service. This data is used in its ephemeral state to build, train, test, and maintain an up-to-date failure predictor. We implemented the proposed approach in a large-scale online ad service that processes billions of requests each month in six data centers distributed across three continents. We show that the proposed predictor is able to maintain failure prediction accuracy as high as 86% during online service changes, whereas the accuracy of state-of-the-art predictors may drop to less than 10%.
{"title":"Real-time failure prediction in online services","authors":"M. Shatnawi, M. Hefeeda","doi":"10.1109/INFOCOM.2015.7218516","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218516","url":null,"abstract":"Current data mining techniques used to create failure predictors for online services require massive amounts of data to build, train, and test the predictors. These operations are tedious, time consuming, and are not done in real-time. Also, the accuracy of the resulting predictor is highly compromised by changes that affect the environment and working conditions of the predictor. We propose a new approach to creating a dynamic failure predictor for online services in real-time and keeping its accuracy high during the services run-time changes. We use synthetic transactions during the run-time lifecycle to generate current data about the service. This data is used in its ephemeral state to build, train, test, and maintain an up-to-date failure predictor. We implemented the proposed approach in a large-scale online ad service that processes billions of requests each month in six data centers distributed in three continents. We show that the proposed predictor is able to maintain failure prediction accuracy as high as 86% during online service changes, whereas the accuracy of the state-of-the-art predictors may drop to less than 10%.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129663311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How bad are the rogues' impact on enterprise 802.11 network performance?
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218401
Kaixin Sui, Youjian Zhao, Dan Pei, Zimu Li
The enterprise 802.11 network (EWLAN) is an important infrastructure for the Mobile Internet, but its performance is being significantly impacted by the ever-increasing number of rogue access points (RAPs). For example, in the university EWLAN we studied, the number of RAPs is more than seven times that of the enterprise APs. In this paper, we propose a generic methodology to measure RAPs' carrier sense interference and hidden terminal interference that uses only readily available SNMP metrics, without any additional measurement hardware. Our results show that, on average, carrier sense interference due to RAPs causes only a 5% access delay increase at the MAC layer, thanks to careful engineering and software optimization. However, hidden terminal interference due to RAPs causes a much more severe increase in the MAC-layer loss rate, up to 30% on average, because no existing approach has explicitly dealt with the hidden terminal impact of rogue APs. Overall, RAP interference can increase the IP-layer delay at the WiFi hop by up to 50%.
{"title":"How bad are the rogues' impact on enterprise 802.11 network performance?","authors":"Kaixin Sui, Youjian Zhao, Dan Pei, Zimu Li","doi":"10.1109/INFOCOM.2015.7218401","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218401","url":null,"abstract":"Enterprise 802.11 Network (EWLAN) is an important infrastructure to the Mobile Internet, but its performance is being significantly impacted by the ever-increasing Rogue access points (RAPs). For example, in the university EWLAN we studied, the number of RAPs is more than seven times that of the enterprise APs. In this paper, we propose a generic methodology to measure RAP's carrier sense interference and hidden terminal interference, and it only uses readily available SNMP metrics, without any additional measurement hardware. Our results show that, on average, the carrier sense interference due to RAPs causes only 5% access delay increase at the MAC layer, because of careful engineering and software optimization. However, hidden terminal interference due to RAPs causes (a much more severe) up to 30% MAC layer loss rate increase on average, because no existing approach has explicitly dealt with the hidden terminal impact from rogue APs. Overall, the RAP interference would increase the IP layer delay at the WiFi hop by up to 50%.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128226643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data preference matters: A new perspective of safety data dissemination in vehicular ad hoc networks
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218489
Qiao Xiang, X. Chen, L. Kong, Lei Rao, Xue Liu
Vehicle-to-vehicle safety data dissemination plays an increasingly important role in ensuring the safety and efficiency of vehicle transportation. When collecting safety data, vehicles always prefer data generated at a closer location over data generated at a distant location, and prefer recent data over outdated data. However, these data preferences have been overlooked in most existing safety data dissemination protocols, preventing vehicles from getting more precise traffic information. In this paper, we explore the feasibility and benefits of incorporating the data preferences of vehicles into the design of efficient safety data dissemination protocols. In particular, we propose the concept of packet-value to quantify these data preferences. We then design PVCast, a packet-value-based safety data dissemination protocol for VANETs. PVCast makes the dissemination decision for each packet based on its packet-value and its effective dissemination coverage, in order to satisfy the data preferences of all vehicles in the network. In addition, PVCast is lightweight and fully distributed. We evaluate the performance of PVCast on the ns-2 platform by comparing it with three representative data dissemination protocols. Simulation results in a typical highway scenario show that PVCast provides a significant improvement in per-vehicle throughput and per-packet dissemination coverage, with small per-packet delay. Our findings demonstrate the importance and necessity of comprehensively considering the data preferences of vehicles when designing an efficient safety data dissemination protocol for VANETs.
{"title":"Data preference matters: A new perspective of safety data dissemination in vehicular ad hoc networks","authors":"Qiao Xiang, X. Chen, L. Kong, Lei Rao, Xue Liu","doi":"10.1109/INFOCOM.2015.7218489","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218489","url":null,"abstract":"Vehicle-to-vehicle safety data dissemination plays an increasingly important role in ensuring the safety and efficiency of vehicle transportation. When collecting safety data, vehicles always prefer data generated at a closer location over data generated at a distant location, and prefer recent data over outdated data. However, these data preferences have been overlooked in most of existing safety data dissemination protocols, preventing vehicles getting more precise traffic information. In this paper, we explore the feasibility and benefits of incorporating the data preferences of vehicles in designing efficient safety data dissemination protocols. In particular, we propose the concept of packet-value to quantify these data preferences. We then design PVCast, a packet-value-based safety data dissemination protocol in VANET. PVCast makes the dissemination decision for each packet based on its packet-value and effective dissemination coverage in order to satisfy the data preferences of all the vehicles in the network. In addition, PVCast is lightweight and fully distributed. We evaluate the performance of PVCast on the ns-2 platform by comparing it with three representative data dissemination protocols. Simulation results in a typical highway scenario show that PVCast provides a significant improvement on per-vehicle throughput, per-packet dissemination coverage with small per-packet delay. Our findings demonstrate the importance and necessity of comprehensively considering the data preferences of vehicles when designing an efficient safety data dissemination protocol for VANET.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131655227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}