P3: Joint optimization of charger placement and power allocation for wireless power transfer
2015 IEEE Conference on Computer Communications (INFOCOM). Pub Date: 2015-08-24. DOI: 10.1109/INFOCOM.2015.7218622
S. Zhang, Zhuzhong Qian, Fanyu Kong, Jie Wu, Sanglu Lu
Wireless power transfer is a promising technology to extend the lifetime, and thus enhance the usability, of energy-hungry battery-powered devices. It enables energy to be wirelessly transmitted from power chargers to energy-receiving devices. Existing studies have mainly focused on maximizing network lifetime, optimizing charging efficiency, minimizing charging delay, etc. In contrast to these works, our objective is to optimize charging quality in a 2-D target area. Specifically, we consider the following charger Placement and Power allocation Problem (P3): given a set of candidate locations for placing chargers, find a charger placement and a corresponding power allocation that maximize the charging quality, subject to a power budget. We prove that P3 is NP-complete. We first study P3 with fixed power levels, for which we propose a (1-1/e)-approximation algorithm; we then design an approximation algorithm of factor (1-1/e)/(2L) for the general P3, where e is the base of the natural logarithm and L is the maximum power level of a charger. We also show how to extend P3 in a cycle. Extensive simulations demonstrate that the gap between our design and the optimal algorithm is within 4.5%, validating our theoretical results.
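The (1-1/e) factor for the fixed-power case is the hallmark guarantee of greedy maximization of a monotone submodular objective under a budget. As an illustrative sketch only (the coverage-style quality function, candidate names, and budget below are invented, not the paper's charging model):

```python
def charging_quality(placement, coverage):
    """Quality = number of devices covered by at least one chosen charger.
    A coverage count like this is monotone and submodular."""
    covered = set()
    for loc in placement:
        covered |= coverage[loc]
    return len(covered)

def greedy_placement(candidates, coverage, budget):
    """Repeatedly add the candidate location with the largest marginal
    gain in quality until the charger budget is exhausted."""
    chosen = set()
    for _ in range(budget):
        base = charging_quality(chosen, coverage)
        best, best_gain = None, 0
        for loc in candidates - chosen:
            gain = charging_quality(chosen | {loc}, coverage) - base
            if gain > best_gain:
                best, best_gain = loc, gain
        if best is None:        # no remaining candidate improves quality
            break
        chosen.add(best)
    return chosen
```

With `coverage = {'A': {1, 2}, 'B': {2, 3}, 'C': {4}}` and a budget of 2, the greedy rule first takes a location covering two devices and then whichever remaining location adds one more, for a final quality of 3.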
MOOC performance prediction via clickstream data and social learning networks
Pub Date: 2015-08-24. DOI: 10.1109/INFOCOM.2015.7218617
Christopher G. Brinton, M. Chiang
We study student performance prediction in Massive Open Online Courses (MOOCs), where the objective is to predict whether a user will be Correct on First Attempt (CFA) in answering a question. In doing so, we develop novel techniques that leverage behavioral data collected by MOOC platforms. Using video-watching clickstream data from one of our MOOCs, we first extract summary quantities (e.g., fraction played, number of pauses) for each user-video pair, and show how certain intervals/sets of values for these behaviors indicate whether a pair is more likely to be CFA for the corresponding question. Motivated by these findings, our methods are designed to determine suitable intervals from training data and to use the corresponding success estimates as learning features in prediction algorithms. Tested against a large set of empirical data, we find that our schemes outperform standard algorithms (i.e., those without behavioral data) for all datasets and metrics tested. Moreover, the improvement is particularly pronounced when considering only the first few course weeks, demonstrating the “early detection” capability of such clickstream data. We also discuss how CFA prediction can be used to depict graphs of the Social Learning Network (SLN) of students, which can help instructors manage courses more effectively.
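The interval-based featurization described above can be sketched as follows: partition a behavioral quantity's range on training data and use each interval's empirical CFA rate as the feature value for new observations. The interval boundaries and toy samples here are illustrative, not the paper's learned intervals:

```python
def interval_cfa_rates(samples, bounds):
    """samples: (value, was_cfa) pairs; bounds: sorted interval edges.
    Returns the empirical CFA rate inside each half-open interval."""
    counts = [[0, 0] for _ in range(len(bounds) - 1)]   # [cfa, total]
    for value, was_cfa in samples:
        for i in range(len(bounds) - 1):
            if bounds[i] <= value < bounds[i + 1]:
                counts[i][0] += int(was_cfa)
                counts[i][1] += 1
                break
    return [c / t if t else 0.0 for c, t in counts]

def featurize(value, bounds, rates):
    """Map a new observation to the CFA-rate feature of its interval."""
    for i in range(len(bounds) - 1):
        if bounds[i] <= value < bounds[i + 1]:
            return rates[i]
    return 0.0
```

For example, if low fraction-played values never coincided with CFA in training while high values always did, a new user-video pair with a high fraction played is featurized with a success estimate near 1.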
ActCap: Accelerating MapReduce on heterogeneous clusters with capability-aware data placement
Pub Date: 2015-08-24. DOI: 10.1109/INFOCOM.2015.7218509
Bo Wang, Jinlei Jiang, Guangwen Yang
As a widely used programming model and implementation for processing large data sets, MapReduce performs poorly on heterogeneous clusters, which, unfortunately, are common in current computing environments. To deal with the problem, this paper: 1) analyzes the causes of performance degradation and identifies the key one as the large volume of inter-node data transfer resulting from even data distribution among nodes of different computing capabilities, and 2) proposes ActCap, a solution that uses a Markov-chain-based model to perform node-capability-aware data placement for continuously incoming data. ActCap has been incorporated into Hadoop and evaluated on a 24-node heterogeneous cluster with 13 benchmarks. The experimental results show that ActCap reduces the percentage of inter-node data transfer from 32.9% to 7.7% and gains an average speedup of 49.8% over Hadoop, as well as an average speedup of 9.8% over Tarazu, the latest related work.
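ActCap's Markov-chain model is not reproduced here; as a minimal sketch of the underlying idea only, the assignment rule below places each incoming block on the node whose share of the data so far lags furthest behind its capability-proportional target, so faster nodes accumulate proportionally more local data. Node names and capability values are invented:

```python
def capability_aware_placement(blocks, capabilities):
    """Assign each incoming block to the node whose current share of the
    data lags furthest behind its capability-proportional target share."""
    total = sum(capabilities.values())
    load = {node: 0 for node in capabilities}
    placement = {}
    placed = 0
    for block in blocks:
        placed += 1
        # integer deficit: capability*placed - load*total, largest first
        node = max(capabilities,
                   key=lambda n: capabilities[n] * placed - load[n] * total)
        placement[block] = node
        load[node] += 1
    return placement
```

With capabilities `{'fast': 3, 'slow': 1}` and eight blocks, the rule hands six blocks to the fast node and two to the slow one, matching the 3:1 capability ratio.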
SEISA: Secure and efficient encrypted image search with access control
Pub Date: 2015-08-24. DOI: 10.1109/INFOCOM.2015.7218593
Jiawei Yuan, Shucheng Yu, Linke Guo
Image search has been widely deployed in many applications because of the rich content that images contain. In the era of big data, image search engines have to be hosted in data centers, and outsourcing image search to public clouds is an economical choice for many small organizations. However, as many images contain sensitive information, e.g., healthcare information and personal faces/locations, directly outsourcing image search services to public clouds obviously raises privacy concerns. With this observation, several attempts have been made at secure image search over encrypted datasets, but they are limited in either search accuracy or search efficiency. In this paper, we propose SEISA, a lightweight secure image search scheme over encrypted data. Compared with image search techniques over plaintexts, SEISA increases search cost by only about 9% and sacrifices only about 3% in search accuracy. SEISA also efficiently supports search access control by employing a novel polynomial-based design, which enables data owners to define who can search a specific image. Furthermore, we design a secure k-means outsourcing algorithm that significantly reduces the data owner's cost. To demonstrate SEISA's performance, we implement a prototype of SEISA on the Amazon EC2 cloud over a dataset of 10 million images.
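SEISA's secure outsourcing protocol is not reproduced here; as background on what is being outsourced, this is a plain (unencrypted) Lloyd's-iteration sketch of k-means on toy 2-D feature points, with initial centers passed in explicitly to keep it deterministic:

```python
def kmeans(points, centers, iters=20):
    """Plain Lloyd's algorithm on 2-D points (no encryption)."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            # assign each point to its nearest current center
            i = min(range(len(centers)),
                    key=lambda j: (p[0] - centers[j][0]) ** 2
                                + (p[1] - centers[j][1]) ** 2)
            clusters[i].append(p)
        # recompute each center as the mean of its cluster
        centers = [(sum(p[0] for p in c) / len(c),
                    sum(p[1] for p in c) / len(c)) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers
```

On two well-separated pairs of points, the centers converge to the pair means after a single iteration; the secure version would run this computation over encrypted feature vectors on the cloud's side.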
On the accuracy of smartphone-based mobile network measurement
Pub Date: 2015-08-24. DOI: 10.1109/INFOCOM.2015.7218402
Weichao Li, Ricky K. P. Mok, Daoyuan Wu, R. Chang
As most mobile apps rely on network connections for their operations, measuring and understanding the performance of mobile networks is becoming very important for end users and operators. Despite the availability of many measurement apps, their measurement accuracy has not received sufficient scrutiny. In this paper, we appraise the accuracy of smartphone-based network performance measurement on the Android platform, using the network round-trip time (RTT) as the metric. We use a multiple-sniffer testbed to overcome the challenge of obtaining a complete trace for acquiring the required timestamps. Our experimental results show that the RTTs measured by the apps are all inflated, ranging from a few milliseconds (ms) to tens of milliseconds. Moreover, the 95% confidence interval can be as high as 2.4 ms. A finer-grained analysis reveals that the delay inflation can be introduced both in the Dalvik VM (DVM) and below it in the Linux kernel. The in-DVM overhead can be mitigated, but the kernel-level overhead cannot. Finally, we propose and implement a native app that uses HTTP messages for network measurement and keeps the delay inflation under 5 ms in almost all cases.
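App-level RTT probing of the kind the paper audits can be sketched in user space: timestamp immediately before sending a small request and immediately after the reply arrives. Everything between those two calls (VM, syscalls, kernel) is exactly where the measured inflation creeps in. The local echo server below is just a stand-in target, not the paper's measurement app:

```python
import socket
import threading
import time

def echo_server(srv):
    """Accept one connection and echo the probe back."""
    conn, _ = srv.accept()
    conn.sendall(conn.recv(64))
    conn.close()

def measure_rtt(host, port, probe=b"ping"):
    """Userspace RTT: timestamp just before send, just after the reply."""
    with socket.create_connection((host, port)) as s:
        t0 = time.perf_counter()
        s.sendall(probe)
        s.recv(64)                      # block until the echo returns
        return (time.perf_counter() - t0) * 1000.0   # milliseconds

srv = socket.socket()
srv.bind(("127.0.0.1", 0))              # OS-assigned free port
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()
rtt_ms = measure_rtt("127.0.0.1", srv.getsockname()[1])
```

Comparing such userspace timestamps against a packet sniffer on the wire, as the paper's multi-sniffer testbed does, exposes how much delay the software stack adds.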
Capturing resource tradeoffs in fair multi-resource allocation
Pub Date: 2015-08-24. DOI: 10.1109/INFOCOM.2015.7218479
Doron Zarchy, David Hay, Michael Schapira
Cloud computing platforms provide computational resources (CPU, storage, etc.) for running users' applications. Often, the same application can be implemented in various ways, each with different resource requirements. Taking advantage of this flexibility when allocating resources to users can both greatly benefit users and lead to much better global resource utilization. We develop a framework for fair resource allocation that captures such implementation tradeoffs by allowing users to submit multiple “resource demands”. We present and analyze two mechanisms for fairly allocating resources in such environments: the Lexicographically-Max-Min-Fair (LMMF) mechanism and the Nash-Bargaining (NB) mechanism. We prove that NB has many desirable properties, including Pareto optimality and envy-freeness, in a broad variety of environments, whereas the seemingly less appealing LMMF fares better, and is even immune to manipulation, in restricted settings of interest.
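Neither mechanism is fully specified in the abstract; as orientation, here is the standard progressive-filling computation of a max-min fair allocation of a single divisible resource with demand caps, the baseline notion that LMMF extends lexicographically to multiple resources and demands. Capacity and demands are invented:

```python
def max_min_fair(capacity, demands):
    """Progressive filling: repeatedly split spare capacity evenly among
    unsatisfied users, capping each user at its demand."""
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))
    remaining = float(capacity)
    while active and remaining > 1e-12:
        share = remaining / len(active)
        satisfied = set()
        for i in active:
            give = min(share, demands[i] - alloc[i])
            alloc[i] += give
            remaining -= give
            if demands[i] - alloc[i] <= 1e-12:
                satisfied.add(i)
        if not satisfied:
            break               # everyone took a full share; capacity spent
        active -= satisfied
    return alloc
```

With capacity 10 and demands [2, 4, 8], the small demands are fully met and the leftover is split, giving the classic max-min outcome [2, 4, 4].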
Static power of mobile devices: Self-updating radio maps for wireless indoor localization
Pub Date: 2015-08-24. DOI: 10.1109/INFOCOM.2015.7218639
Chenshu Wu, Zheng Yang, Chaowei Xiao, Chaofan Yang, Yunhao Liu, M. Liu
The proliferation of mobile computing has made WiFi-based indoor localization one of the most attractive and promising techniques for ubiquitous applications. A primary concern in making these technologies fully practical is combating harsh indoor environmental dynamics, especially for long-term deployments. Despite extensive research on WiFi fingerprint-based localization, the problem of radio map adaptation has not been sufficiently studied and remains open. In this work, we propose AcMu, an automatic and continuous radio map self-updating service for wireless indoor localization that exploits the static behaviors of mobile devices. By accurately pinpointing mobile devices with a novel trajectory matching algorithm, we employ them as mobile reference points that collect real-time RSS samples while they are static. With these fresh reference data, we adapt the complete radio map by learning an underlying relationship of RSS dependency between different locations, which is expected to be relatively constant over time. Extensive experiments over 20 days across 6 months demonstrate that AcMu effectively accommodates RSS variations over time and accurately predicts a fresh radio map with average errors of less than 5 dB. Moreover, AcMu provides a 2x improvement in localization accuracy by maintaining an up-to-date radio map.
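The learned RSS dependency between locations is what lets one fresh reading at a reference point refresh fingerprints elsewhere. A deliberately simplified sketch, assuming a linear pairwise dependency fitted by least squares (the paper's actual model may differ; the RSS values are invented):

```python
def fit_dependency(ref_hist, loc_hist):
    """Least-squares fit of loc_rss ~ a * ref_rss + b from historical
    co-measurements at a reference point and another location (in dBm)."""
    n = len(ref_hist)
    mx = sum(ref_hist) / n
    my = sum(loc_hist) / n
    sxx = sum((x - mx) ** 2 for x in ref_hist)
    sxy = sum((x - mx) * (y - my) for x, y in zip(ref_hist, loc_hist))
    a = sxy / sxx
    return a, my - a * mx

def refresh_fingerprint(fresh_ref_rss, a, b):
    """Predict a fresh fingerprint value from a new reference reading."""
    return a * fresh_ref_rss + b
```

If the two locations historically track each other with a constant 10 dB offset, a fresh reference reading of -55 dBm predicts an updated fingerprint of -65 dBm at the other location.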
Video acuity assessment in mobile devices
Pub Date: 2015-08-24. DOI: 10.1109/INFOCOM.2015.7218361
E. Baik, A. Pande, Chris Stover, P. Mohapatra
The quality of mobile videos is usually quantified through the Quality of Experience (QoE), which is typically based on network QoS measurements, user engagement, or post-view subjective scores. Such quantifications are not adequate for real-time evaluation: they cannot provide online feedback for improving visual acuity, which represents the actual viewing experience of the end user. We present a visual acuity framework that performs fast online computations on a mobile device and provides an accurate estimate of mobile video QoE. We identify and study the three main causes that impact visual acuity in mobile videos: spatial distortions, types of buffering, and resolution changes. Each of them can be accurately modeled using our framework. We use machine learning techniques to build a prediction model for visual acuity, which achieves more than 78% accuracy. We present an experimental implementation on the iPhone 4 and 5s to show that the proposed visual acuity framework is feasible to deploy on mobile devices. We validate the proposed framework using a data corpus of over 2852 mobile video clips.
On the impossibility of efficient self-stabilization in virtual overlays with churn
Pub Date: 2015-08-24. DOI: 10.1109/INFOCOM.2015.7218394
Stefanie Roos, T. Strufe
Virtual overlays generate topologies for greedy routing, such as rings or hypercubes, on connectivity-restricted networks. They have been proposed, for instance, to achieve efficient content discovery in the Darknet mode of Freenet, which provides a private and secure communication platform for dissidents and whistle-blowers. Virtual overlays create tunnels between nodes with neighboring addresses in the topology. The routing performance is hence directly related to the length of the tunnels, which have to be set up and maintained at the cost of communication overhead in the absence of an underlying routing protocol. In this paper, we show that it is impossible to efficiently maintain sufficiently short tunnels. Specifically, we prove that in a dynamic network, either the maintenance or the routing eventually exceeds polylogarithmic cost in the number of participants. Our simulations additionally show that the length of the tunnels grows quickly if standard maintenance protocols are applied. Thus, virtual overlays can only offer efficient routing at the price of high maintenance costs.
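The cost structure the paper analyzes can be illustrated with a toy greedy route over a virtual ring: each overlay hop moves to the neighbor whose address is closest to the destination, but really costs the length of the underlying tunnel, which is the quantity shown not to stay short under churn. Addresses, neighbor lists, and tunnel lengths below are invented for illustration:

```python
def ring_dist(a, b, size):
    """Shortest distance between two addresses on a ring of given size."""
    return min((a - b) % size, (b - a) % size)

def greedy_route(src, dst, neighbors, tunnel_len, size):
    """Greedy routing over virtual ring addresses. Returns the overlay
    hop sequence and the total underlying tunnel cost paid."""
    node, hops, cost = src, [src], 0
    while node != dst:
        nxt = min(neighbors[node], key=lambda n: ring_dist(n, dst, size))
        if ring_dist(nxt, dst, size) >= ring_dist(node, dst, size):
            break                       # greedy dead end
        cost += tunnel_len[(node, nxt)]
        node = nxt
        hops.append(node)
    return hops, cost
```

Even when the overlay path is short in hops (here two), the real cost is the sum of tunnel lengths, so routing degrades as maintenance lets tunnels stretch.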
Contextual-code: Simplifying information pulling from targeted sources in physical world
Pub Date: 2015-08-24. DOI: 10.1109/INFOCOM.2015.7218611
Yang Tian, Kaigui Bian, G. Shen, Xiaochen Liu, Xiaoguang Li, T. Moscibroda
The popularity of QR codes clearly indicates users' strong demand to acquire (or pull) further information from sources of interest (e.g., a poster) in the physical world. However, existing information pulling practices, such as a mobile search or QR code scanning, incur heavy user involvement in identifying the targeted posters. Meanwhile, businesses (e.g., advertisers) are also interested in learning about the behaviors of potential customers, such as where, when, and how users show interest in their offerings. Unfortunately, little such context information is provided by existing information pulling systems. In this paper, we present Contextual-Code (C-Code), an information pulling system that greatly reduces users' effort in pulling information from targeted posters while providing rich context information about user behavior to businesses. C-Code leverages the rich contextual information captured by smartphone sensors to automatically disambiguate information sources in different contexts. It assigns simple codes (e.g., a character) to sources whose contexts are not discriminating enough. To pull information from a source of interest, users only need to input the simple code shown on the targeted source. Our experiments demonstrate the effectiveness of the C-Code design: users can effectively and uniquely identify targeted information sources with an average accuracy over 90%.
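The code-assignment idea described above can be sketched directly: sources whose sensed context (reduced here to a coarse context cell) is unique need no code, while sources sharing a cell get short codes to tell them apart. The context representation and code alphabet are invented simplifications of whatever the smartphone sensors actually provide:

```python
from collections import defaultdict
from string import ascii_uppercase

def assign_codes(sources):
    """sources: mapping source -> context cell. Returns source -> code,
    where '' means the context alone already identifies the source."""
    by_context = defaultdict(list)
    for src, ctx in sources.items():
        by_context[ctx].append(src)
    codes = {}
    for ctx, group in by_context.items():
        if len(group) == 1:
            codes[group[0]] = ""        # context is discriminating enough
        else:
            for letter, src in zip(ascii_uppercase, sorted(group)):
                codes[src] = letter     # minimal per-context code
    return codes
```

A poster that is alone in its context cell is pulled with no typing at all; two posters in the same hall are disambiguated by a single character, which is all the user has to enter.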