Independent counter estimation buckets
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218646
Gil Einziger, B. Fellman, Yaron Kassner
Measurement capabilities are essential for a variety of network applications, such as load balancing, routing, fairness, and intrusion detection. These capabilities require large counter arrays in order to monitor the traffic of all network flows. While commodity SRAM memories are capable of operating at line speed, they are too small to accommodate large counter arrays. Previous works suggested estimators, which trade precision for reduced space. However, in order to accurately estimate the largest counter, these methods compromise the accuracy of the rest of the counters. In this work we present a closed-form representation of the optimal estimation function. We then introduce Independent Counter Estimation Buckets (ICE-Buckets), a novel algorithm that improves estimation accuracy for all counters. This is achieved by separating the flows into buckets and configuring the optimal estimation function according to each bucket's counter scale. We prove an improved upper bound on the relative error and demonstrate an accuracy improvement of up to 57 times on real Internet packet traces.
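The abstract leaves the estimator details to the paper, but the general idea behind counter estimation can be sketched concretely. Below is a minimal Python illustration of an up-probability estimator of the kind ICE-Buckets builds on: each counter stores a small index i, an estimation function A maps the index back to an estimate, and an increment advances the index with probability 1/(A(i+1)-A(i)) so the estimate stays unbiased. The geometric function used here (roughly constant relative error a) and all names are our own assumptions, not the paper's closed-form optimum.

```python
import random

def make_estimation_func(a):
    """Estimation function A with step A(i+1) - A(i) = 1 + 2*a*A(i),
    i.e. A(i) = ((1 + 2a)^i - 1) / (2a): roughly constant relative
    error a across the counting range (an assumed, classical choice)."""
    def A(i):
        return ((1.0 + 2.0 * a) ** i - 1.0) / (2.0 * a)
    return A

class EstimatedCounter:
    """Stores a small index instead of the full count; estimates A(i)."""
    def __init__(self, A):
        self.A, self.i = A, 0

    def increment(self):
        # Advance with probability 1 / (A(i+1) - A(i)), which keeps
        # E[A(i)] equal to the true number of increments (unbiased).
        step = self.A(self.i + 1) - self.A(self.i)
        if random.random() < 1.0 / step:
            self.i += 1

    def estimate(self):
        return self.A(self.i)

c = EstimatedCounter(make_estimation_func(a=0.01))
for _ in range(100_000):
    c.increment()
print(round(c.estimate()))  # close to 100000 on average
```

Per-bucket configuration would then amount to giving each bucket its own scale parameter, chosen from the counter scale of that bucket, which is the knob the paper optimizes.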
{"title":"Independent counter estimation buckets","authors":"Gil Einziger, B. Fellman, Yaron Kassner","doi":"10.1109/INFOCOM.2015.7218646","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218646","url":null,"abstract":"Measurement capabilities are essential for a variety of network applications, such as load balancing, routing, fairness and intrusion detection. These capabilities require large counter arrays in order to monitor the traffic of all network flows. While commodity SRAM memories are capable of operating at line speed, they are too small to accommodate large counter arrays. Previous works suggested estimators, which trade precision for reduced space. However, in order to accurately estimate the largest counter, these methods compromise the accuracy of the rest of the counters. In this work we present a closed form representation of the optimal estimation function. We then introduce Independent Counter Estimation Buckets (ICE-Buckets), a novel algorithm that improves estimation accuracy for all counters. This is achieved by separating the flows to buckets and configuring the optimal estimation function according to each bucket's counter scale. We prove an improved upper bound on the relative error and demonstrate an accuracy improvement of up to 57 times on real Internet packet traces.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115773580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
JITScope: Protecting web users from control-flow hijacking attacks
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218424
Chao Zhang, Chengyu Song, Byoungyoung Lee, Kangjie Lu, William R. Harris, Taesoo Kim, Wenke Lee
Web browsers are among the most important end-user applications for browsing, retrieving, and presenting Internet resources. Malicious or compromised resources may endanger Web users by hijacking web browsers to execute arbitrary malicious code in the victims' systems. Unfortunately, the widely adopted Just-In-Time (JIT) compilation optimization technique, which compiles source code to native code at runtime, significantly increases this risk. By exploiting JIT compiled code, attackers can bypass all currently deployed defenses. In this paper, we systematically investigate threats against JIT compiled code and the challenges of protecting it. We propose a general defense solution, JITScope, to enforce Control-Flow Integrity (CFI) on both statically compiled and JIT compiled code. Our solution furthermore enforces the W⊕X policy on JIT compiled code, preventing it from being overwritten by attackers. We show that our prototype implementation of JITScope on the popular Firefox web browser introduces reasonably low performance overhead while defeating existing real-world control-flow hijacking attacks.
{"title":"JITScope: Protecting web users from control-flow hijacking attacks","authors":"Chao Zhang, Chengyu Song, Byoungyoung Lee, Kangjie Lu, William R. Harris, Taesoo Kim, Wenke Lee","doi":"10.1109/INFOCOM.2015.7218424","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218424","url":null,"abstract":"Web browsers are one of the most important enduser applications to browse, retrieve, and present Internet resources. Malicious or compromised resources may endanger Web users by hijacking web browsers to execute arbitrary malicious code in the victims' systems. Unfortunately, the widely-adopted Just-In-Time compilation (JIT) optimization technique, which compiles source code to native code at runtime, significantly increases this risk. By exploiting JIT compiled code, attackers can bypass all currently deployed defenses. In this paper, we systematically investigate threats against JIT compiled code, and the challenges of protecting JIT compiled code. We propose a general defense solution, JITScope, to enforce Control-Flow Integrity (CFI) on both statically compiled and JIT compiled code. Our solution furthermore enforces the W⊕X policy on JIT compiled code, preventing the JIT compiled code from being overwritten by attackers. We show that our prototype implementation of JITScope on the popular Firefox web browser introduces a reasonably low performance overhead, while defeating existing real-world control flow hijacking attacks.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"175 11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120956838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
StoreApp: A shared storage appliance for efficient and scalable virtualized Hadoop clusters
Yanfei Guo, J. Rao, Dazhao Cheng, Changjun Jiang, Chengzhong Xu, Xiaobo Zhou
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218427
Virtualizing Hadoop clusters provides many benefits, including rapid deployment, on-demand elasticity, and secure multi-tenancy. However, a simple migration of Hadoop to a virtualized environment does not fully exploit these benefits. The dual role of a Hadoop worker, acting as both a compute node and a data node, makes it difficult to achieve efficient IO processing, maintain data locality, and exploit resource elasticity in the cloud. We find that decoupling per-node storage from its computation opens up opportunities for IO acceleration, locality improvement, and on-the-fly cluster resizing. To fully exploit these opportunities, we propose StoreApp, a shared storage appliance for virtual Hadoop worker nodes co-located on the same physical host. To completely separate storage from computation and prioritize IO processing, StoreApp proactively pushes intermediate data generated by map tasks to the storage node. StoreApp also implements late-binding task creation to take advantage of data prefetched due to misaligned records. Experimental results show that StoreApp achieves up to 61% performance improvement compared to stock Hadoop and resizes the cluster to the (near-)optimal degree of parallelism.
{"title":"StoreApp: A shared storage appliance for efficient and scalable virtualized Hadoop clusters","authors":"Yanfei Guo, J. Rao, Dazhao Cheng, Changjun Jiang, Chengzhong Xu, Xiaobo Zhou","doi":"10.1109/INFOCOM.2015.7218427","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218427","url":null,"abstract":"Virtualizing Hadoop clusters provides many benefits, including rapid deployment, on-demand elasticity and secure multi-tenancy. However, a simple migration of Hadoop to a virtualized environment does not fully exploit these benefits. The dual role of a Hadoop worker, acting as both a compute node and a data node, makes it difficult to achieve efficient IO processing, maintain data locality, and exploit resource elasticity in the cloud. We find that decoupling per-node storage from its computation opens up opportunities for IO acceleration, locality improvement, and on-the-fly cluster resizing. To fully exploit these opportunities, we propose StoreApp, a shared storage appliance for virtual Hadoop worker nodes co-located on the same physical host. To completely separate storage from computation and prioritize IO processing, StoreApp pro-actively pushes intermediate data generated by map tasks to the storage node. StoreApp also implements late-binding task creation to take the advantage of prefetched data due to mis-aligned records. Experimental results show that StoreApp achieves up to 61% performance improvement compared to stock Hadoop and resizes the cluster to the (near) optimal degree of parallelism.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124859213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Blast: Accelerating high-performance data analytics applications by optical multicast
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218576
Yiting Xia, Xiaoye Steven Sun
Multicast data dissemination is the performance bottleneck for high-performance data analytics applications in cluster computing, because terabytes of data need to be distributed routinely from a single data source to hundreds of computing servers. The state-of-the-art solutions for delivering these massive data sets all rely on application-layer overlays, which suffer from inherent performance limitations. This paper presents Blast, a system for accelerating data analytics applications by optical multicast. Blast leverages passive optical power splitting to duplicate data at line rate on a physical-layer broadcast medium separate from the packet-switched network core. We implement Blast on a small-scale hardware testbed. Multicast transmission can start 33ms after an application issues the request, resulting in a very small control overhead. We evaluate Blast's performance at the scale of thousands of servers through simulation. Using only a 10Gbps optical uplink per rack, Blast achieves up to 102× better performance than the state-of-the-art solutions, even when they are used over a non-blocking core network with a 400Gbps uplink per rack.
{"title":"Blast: Accelerating high-performance data analytics applications by optical multicast","authors":"Yiting Xia, Xiaoye Steven Sun","doi":"10.1109/INFOCOM.2015.7218576","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218576","url":null,"abstract":"Multicast data dissemination is the performance bottleneck for high-performance data analytics applications in cluster computing, because terabytes of data need to be distributed routinely from a single data source to hundreds of computing servers. The state-of-the-art solutions for delivering these massive data sets all rely on application-layer overlays, which suffer from inherent performance limitations. This paper presents Blast, a system for accelerating data analytics applications by optical multicast. Blast leverages passive optical power splitting to duplicate data at line rate on a physical-layer broadcast medium separate from the packet-switched network core. We implement Blast on a small-scale hardware testbed. Multicast transmission can start 33ms after an application issues the request, resulting in a very small control overhead. We evaluate Blast's performance at the scale of thousands of servers through simulation. Using only a 10Gbps optical uplink per rack, Blast achieves upto 102× better performance than the state-of-the-art solutions even when they are used over a non-blocking core network with a 400Gbps uplink per rack.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"87 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126100998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AE: An Asymmetric Extremum content defined chunking algorithm for fast and bandwidth-efficient data deduplication
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218510
Yucheng Zhang, Hong Jiang, D. Feng, Wen Xia, Min Fu, Fangting Huang, Yukun Zhou
Data deduplication, a space-efficient and bandwidth-saving technology, plays an important role in bandwidth-efficient data transmission in various data-intensive network and cloud applications. Rabin-based and MAXP-based Content-Defined Chunking (CDC) algorithms, while robust in finding suitable cut-points for chunk-level redundancy elimination, face two key challenges: (1) low chunking throughput, which makes the chunking stage the deduplication performance bottleneck, and (2) large chunk-size variance, which decreases deduplication efficiency. To address these challenges, this paper proposes a new CDC algorithm called the Asymmetric Extremum (AE) algorithm. The main idea behind AE is the observation that an extreme value in an asymmetric local range is unlikely to be replaced by a new extreme value when chunk boundaries shift. This motivates AE's use of an asymmetric (rather than symmetric, as in MAXP) local range to identify cut-points, simultaneously achieving high chunking throughput and low chunk-size variance. As a result, AE addresses both the low chunking throughput of MAXP and Rabin and the high chunk-size variance of Rabin. Experimental results based on four real-world datasets show that AE improves the throughput of the state-of-the-art CDC algorithms by 3× while attaining comparable or higher deduplication efficiency.
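The cut-point rule itself is simple enough to sketch. The following Python fragment is our reading of the asymmetric-extremum idea from the abstract, not the paper's exact pseudocode: a boundary is declared w bytes after a byte value that remains the maximum of the current chunk, so the scan needs roughly one comparison per input byte and no backtracking (the function name and end-of-stream handling are our assumptions).

```python
def ae_chunks(data, w):
    """Return chunk-boundary offsets for `data` (a bytes object).
    A position becomes the running extreme if its byte value exceeds
    every byte seen so far in the current chunk; a boundary is cut
    w bytes after an extreme that no later byte tops."""
    boundaries = []
    n = len(data)
    start = 0
    while start < n:
        max_val = data[start]
        max_pos = start
        cut = n                      # fall back: stream ends first
        i = start + 1
        while i < n:
            if data[i] > max_val:    # new extreme: restart the window
                max_val, max_pos = data[i], i
            elif i == max_pos + w:   # extreme survived w bytes: cut here
                cut = i
                break
            i += 1
        boundaries.append(cut)
        start = cut
    return boundaries

print(ae_chunks(b"some example payload to be chunked", w=8))
```

Here w controls the expected chunk size, and no chunk can be shorter than w bytes, which is one way to see why the chunk-size variance stays low compared with Rabin.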
{"title":"AE: An Asymmetric Extremum content defined chunking algorithm for fast and bandwidth-efficient data deduplication","authors":"Yucheng Zhang, Hong Jiang, D. Feng, Wen Xia, Min Fu, Fangting Huang, Yukun Zhou","doi":"10.1109/INFOCOM.2015.7218510","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218510","url":null,"abstract":"Data deduplication, a space-efficient and bandwidth-saving technology, plays an important role in bandwidth-efficient data transmission in various data-intensive network and cloud applications. Rabin-based and MAXP-based Content-Defined Chunking (CDC) algorithms, while robust in finding suitable cut-points for chunk-level redundancy elimination, face the key challenges of (1) low chunking throughput that renders the chunking stage the deduplication performance bottleneck and (2) large chunk-size variance that decreases deduplication efficiency. To address these challenges, this paper proposes a new CDC algorithm called the Asymmetric Extremum (AE) algorithm. The main idea behind AE is based on the observation that the extreme value in an asymmetric local range is not likely to be replaced by a new extreme value in dealing with the boundaries-shift problem, which motivates AE's use of asymmetric (rather than symmetric as in MAXP) local range to identify cut-points and simultaneously achieve high chunking throughput and low chunk-size variance. As a result, AE simultaneously addresses the problems of low chunking throughput in MAXP and Rabin and high chunk-size variance in Rabin. The experimental results based on four real-world datasets show that AE improves the throughput performance of the state-of-the-art CDC algorithms by 3x while attaining comparable or higher deduplication efficiency.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125392711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploiting causes and effects of wireless link correlation for better performance
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218403
S. Kim, Shuai Wang, T. He
Contradicting the widely held assumption of link independence, the phenomenon of reception correlation among nearby receivers has recently been revealed and exploited by a variety of protocols [3], [8], [17], [21], [23], [24]. However, despite the diverse correlation-aware designs proposed to date, they share a common shortcoming: link correlation is measured inaccurately, which leads to sub-optimal performance. In this work we propose a general framework for accurately capturing link correlation, enabling better utilization of the phenomenon by the protocols built on top of it. Our framework uses SINR (Signal to Interference plus Noise Ratio) to detect correlations, and then models the correlations for in-network use. We show that our design is lightweight in both computation and storage. We apply our model to opportunistic routing and network coding on a physical 802.15.4 testbed, obtaining energy savings of 25% and 15%, respectively.
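As context for what "measuring link correlation" means operationally, here is a small Python helper (our own illustration, not the paper's SINR-based method) that computes the empirical reception correlation between two receivers from synchronized 0/1 reception bitmaps of the same broadcast packets; correlation-aware protocols compare the conditional reception probability against the marginal one.

```python
def reception_correlation(bitmap_a, bitmap_b):
    """bitmap_a[k] and bitmap_b[k] are 1 iff receivers A and B heard
    broadcast packet k. Returns (P(B|A), P(B)); P(B|A) well above
    P(B) indicates positively correlated links."""
    n = len(bitmap_a)
    p_a = sum(bitmap_a) / n
    p_b = sum(bitmap_b) / n
    p_both = sum(a & b for a, b in zip(bitmap_a, bitmap_b)) / n
    p_b_given_a = p_both / p_a if p_a > 0 else 0.0
    return p_b_given_a, p_b

print(reception_correlation([1, 1, 0, 1, 0, 1], [1, 1, 0, 1, 0, 0]))
```

The paper's argument is that such bitmap-style estimates are inaccurate in practice, and that conditioning the measurement on SINR captures the underlying cause of the correlation.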
{"title":"Exploiting causes and effects of wireless link correlation for better performance","authors":"S. Kim, Shuai Wang, T. He","doi":"10.1109/INFOCOM.2015.7218403","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218403","url":null,"abstract":"Contradicting the widely believed assumption of link independence, recently the phenomenon of reception correlation among nearby receivers has been revealed and exploited for varieties of protocols [3], [8], [17], [21], [23], [24]. However, despite the diversified correlation-aware designs proposed up to date, they commonly suffer from a shortcoming where link correlation is inaccurately measured, which leads them to sub-optimal performance. In this work we propose a general framework for accurate capturing of link correlation, enabling better utilization of the phenomenon for protocols lying on top of it. Our framework uses SINR (Signal to Interference plus Noise Ratio) to detect correlations, followed by modeling the correlations for in-network use. We show that our design is light-weight, both computation and storage-wise. We apply our model to opportunistic routing and network coding on a physical 802.15.4 test-bed for energy savings of 25% and 15%.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126714930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing reliability and response times via replication in computing clusters
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218512
Z. Qiu, Juan F. Pérez
Computing clusters have been widely deployed for scientific and engineering applications to support intensive computation and massive data operations. As applications and resources in a cluster are subject to failures, fault-tolerance strategies are commonly adopted, sometimes at the expense of additional delays in job response times or unnecessarily increased resource usage. In this paper, we explore concurrent replication with canceling, a fault-tolerance approach where a job and its replica are processed concurrently and the successful completion of either triggers the cancellation of the other. We propose a stochastic model to study how this approach affects cluster service-level objectives (SLOs), particularly the offered response-time percentiles. In addition to the expected gains in reliability, the proposed model allows us to determine the utilization regions where introducing replication with canceling effectively reduces response times. Moreover, we show how this model can support resource-provisioning decisions with reliability and response-time guarantees.
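The canceling effect is easy to see in isolation. The Monte Carlo sketch below (our illustration, not the paper's stochastic model) compares the 95th percentile of a job's service time with and without a concurrently processed replica whose first completion cancels the other; with exponential service times the tail roughly halves. It deliberately ignores queueing: in a real cluster the replica also consumes capacity, which is exactly why the paper looks for the utilization regions where replication still pays off.

```python
import random

def p95(samples):
    """Empirical 95th percentile of a list of samples."""
    return sorted(samples)[int(0.95 * (len(samples) - 1))]

def service_time_tails(n=100_000, mean=1.0):
    # Single copy: one exponential service time per job.
    single = [random.expovariate(1.0 / mean) for _ in range(n)]
    # Replication with canceling: two copies, keep the first to finish.
    replicated = [min(random.expovariate(1.0 / mean),
                      random.expovariate(1.0 / mean)) for _ in range(n)]
    return p95(single), p95(replicated)

print(service_time_tails())  # second value is about half the first
```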
{"title":"Enhancing reliability and response times via replication in computing clusters","authors":"Z. Qiu, Juan F. Pérez","doi":"10.1109/INFOCOM.2015.7218512","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218512","url":null,"abstract":"Computing clusters have been widely deployed for scientific and engineering applications to support intensive computation and massive data operations. As applications and resources in a cluster are subject to failures, fault-tolerance strategies are commonly adopted, sometimes at the expense of additional delays in job response times, or unnecessarily increasing resource usage. In this paper, we explore concurrent replication with canceling, a fault-tolerance approach where jobs and their replicas are processed concurrently, and the successful completion of either triggers the removals of its replica. We propose a stochastic model to study how this approach affects the cluster service level objectives (SLOs), particularly the offered response time percentiles. In addition to the expected gains in reliability, the proposed model allows us to determine the regions of the utilization where introducing replication with canceling effectively reduces the response times. Moreover, we show how this model can support resource provisioning decisions with reliability and response time guarantees.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114890076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An online procurement auction for power demand response in storage-assisted smart grids
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218655
Ruiting Zhou, Zongpeng Li, Chuan Wu
The quintessential problem in a smart grid is the matching between power supply and demand - a perfect balance across the temporal domain - for the stable operation of the power network. Recent studies have revealed the critical role of electricity storage devices, exemplified by rechargeable batteries and plug-in electric vehicles (PEVs), in helping achieve this balance through power arbitrage. Such potential from batteries and PEVs cannot be fully realized without an appropriate economic mechanism that incentivizes energy discharging at times when supply is tight. This work presents a systematic study of this demand-response problem in storage-assisted smart grids through a well-designed online procurement auction mechanism. The long-term social welfare maximization problem is naturally formulated as a linear integer program. We first apply a primal-dual optimization algorithm to decompose the online auction design problem into a series of one-round auction design problems, incurring only a small loss in competitive ratio. For the one-round auction, we show that social welfare maximization is still NP-hard, and design a primal-dual approximation algorithm that works in concert with the decomposition algorithm. The end result is a power procurement auction that is online, truthful, and 2-competitive in typical scenarios.
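The paper's mechanism is an online primal-dual auction, which is beyond a short sketch, but the one-round building block rests on a standard idea: pay winners a threshold (critical) price rather than their own bid, so truthful asking is a dominant strategy. The Python toy below shows that idea for the simplest case of single-unit sellers (a Vickrey-style (k+1)-price reverse auction); it is a generic illustration we substitute for exposition, not the paper's approximation algorithm.

```python
def one_round_procurement(asks, k):
    """Buy k units, one per seller. Winners are the k lowest asks;
    each winner is paid the (k+1)-st lowest ask, i.e. the threshold
    price above which it would have lost, making truthful asking a
    dominant strategy for single-unit sellers."""
    order = sorted(range(len(asks)), key=lambda s: asks[s])
    winners = order[:k]
    payment = asks[order[k]] if len(asks) > k else None
    return winners, payment

print(one_round_procurement([3.0, 1.0, 4.0, 1.5, 2.0], k=2))
# -> ([1, 3], 2.0): sellers asking 1.0 and 1.5 win, each paid 2.0
```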
{"title":"An online procurement auction for power demand response in storage-assisted smart grids","authors":"Ruiting Zhou, Zongpeng Li, Chuan Wu","doi":"10.1109/INFOCOM.2015.7218655","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218655","url":null,"abstract":"The quintessential problem in a smart grid is the matching between power supply and demand - a perfect balance across the temporal domain, for the stable operation of the power network. Recent studies have revealed the critical role of electricity storage devices, as exemplified by rechargeable batteries and plug-in electric vehicles (PEVs), in helping achieve the balance through power arbitrage. Such potential from batteries and PEVs can not be fully realized without an appropriate economic mechanism that incentivizes energy discharging at times when supply is tight. This work aims at a systematic study of such demand response problem in storage-assisted smart grids through a well-designed online procurement auction mechanism. The long-term social welfare maximization problem is naturally formulated into a linear integer program. We first apply a primal-dual optimization algorithm to decompose the online auction design problem into a series of one-round auction design problems, achieving a small loss in competitive ratio. For the one round auction, we show that social welfare maximization is still NP-hard, and design a primal-dual approximation algorithm that works in concert with the decomposition algorithm. The end result is a truthful power procurement auction that is online, truthful, and 2-competitive in typical scenarios.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116534373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
iSelf: Towards cold-start emotion labeling using transfer learning with smartphones
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218495
Boyuan Sun, Q. Ma, Shanfeng Zhang, Kebin Liu, Yunhao Liu
To meet the demand for more intelligent automation services on smartphones, more and more applications are developed based on users' emotions and personality. It is widely accepted that a relationship exists between personal emotions and smartphone usage patterns. Most existing work studies this relationship by learning from manually labeled samples collected from smartphone users. The manual labeling process, however, is time-consuming, labor-intensive, and expensive. To address this issue, we propose iSelf, a system that provides a general service for automatically detecting a user's emotions under cold-start conditions with a smartphone. Using transfer learning, iSelf achieves high accuracy given only a few labeled samples. We also develop a hybrid public/personal inference engine and a validation system, so that iSelf stays continuously updated. Extensive experiments show an inference accuracy of about 75%, which improves further through validation and updates.
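For a concrete sense of "a few labeled samples plus transfer", here is a minimal instance-weighting sketch in Python/scikit-learn. It is one simple transfer scheme of our own choosing, not iSelf's algorithm: a classifier is trained on pooled public data plus the new user's few labels, with the user's samples up-weighted so the shared model adapts toward that user.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cold_start_classifier(X_public, y_public, X_user, y_user,
                          user_weight=5.0):
    """Pool public and personal samples; up-weight the scarce
    personal ones so they steer the decision boundary."""
    X = np.vstack([X_public, X_user])
    y = np.concatenate([y_public, y_user])
    w = np.concatenate([np.ones(len(y_public)),
                        np.full(len(y_user), user_weight)])
    return LogisticRegression(max_iter=1000).fit(X, y, sample_weight=w)
```

The abstract's hybrid public/personal engine and validation loop would then retrain (or re-weight) as newly validated personal labels arrive, which is how the reported accuracy keeps improving.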
{"title":"iSelf: Towards cold-start emotion labeling using transfer learning with smartphones","authors":"Boyuan Sun, Q. Ma, Shanfeng Zhang, Kebin Liu, Yunhao Liu","doi":"10.1109/INFOCOM.2015.7218495","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218495","url":null,"abstract":"To meet the demand of more intelligent automation services on smartphone, more and more applications are developed based on users' emotion and personality. It has been a consensus that a relationship exists between personal emotions and usage pattern of smartphone. Most of existing work studies this relationship by learning manually labeled samples collected from smartphone users. The manual labeling process, however, is time-consuming, labor-intensive and money-consuming. To address this issue, we propose iSelf, a system which provides a general service of automatic detection for user's emotions in cold-start conditions with smartphone. Using transfer learning technology, iSelf achieves high accuracy given only a few labeled samples. We also develop a hybrid public/personal inference engine and validation system, so as to make iSelf maintain continuous update. Through extensive experiments, the inferring accuracy is tested about 75% and can be improved increasingly through validation and update.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122759857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supporting WiFi and LTE co-existence
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218451
Sangki Yun, L. Qiu
Motivated by the recent push to deploy LTE in unlicensed spectrum, this paper develops a novel system to enable co-existence between LTE and WiFi. Our approach leverages the LTE and WiFi antennas already available on smartphones to let LTE and WiFi transmit together and successfully decode the mutually interfering signals. Our system offers several distinct advantages over existing MIMO work: (i) it can decode all the interfering signals under cross-technology interference, even when the interfering signals have similar power and occupy similar frequencies, (ii) it does not need clean reference signals from either the WiFi or LTE transmission, (iii) it can decode interfering WiFi MIMO and LTE transmissions, and (iv) it has a simple yet effective carrier-sense mechanism that lets WiFi access the medium under interfering LTE signals while avoiding other WiFi transmissions. We use a USRP implementation and experiments to show its effectiveness.
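The abstract does not give the decoder, but the multi-antenna separation step it relies on can be illustrated with the textbook zero-forcing receiver: with two receive antennas and a known 2×2 channel matrix, two co-channel streams are recovered by inverting the channel. The NumPy sketch below shows only this generic baseline; estimating the channel without clean reference signals, which is the paper's hard part, is not shown.

```python
import numpy as np

def zero_forcing_separate(Y, H):
    """Y: 2 x n received samples from two antennas.
    H: 2 x 2 channel matrix (row = rx antenna, column = tx stream).
    Returns the 2 x n estimate of the two transmitted streams."""
    return np.linalg.pinv(H) @ Y

# Two interfering unit-power streams through a random known channel:
rng = np.random.default_rng(0)
X = rng.standard_normal((2, 1000))   # e.g. one LTE, one WiFi stream
H = rng.standard_normal((2, 2))
X_hat = zero_forcing_separate(H @ X, H)  # recovers X up to noise
```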
{"title":"Supporting WiFi and LTE co-existence","authors":"Sangki Yun, L. Qiu","doi":"10.1109/INFOCOM.2015.7218451","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218451","url":null,"abstract":"Motivated by the recent push to deploy LTE in unlicensed spectrum, this paper develops a novel system to enable co-existence between LTE and WiFi. Our approach leverages LTE and WiFi antennas already available on smartphones to let LTE and WiFi transmit together and successfully decode the interfered signals. Our system offers several distinct advantages over existing MIMO work: (i) it can decode all the interfering signals under cross technology interference even when the interfering signals have similar power and occupy similar frequency, (ii) it does not need clean reference signals from either WiFi or LTE transmission, (iii) it can decode interfering WiFi MIMO and LTE transmissions, and (iv) it has a simple yet effective carrier sense mechanism for WiFi to access the medium under interfering LTE signals while avoiding other WiFi transmissions. We use USRP implementation and experiments to show its effectiveness.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114379319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}