Mobile devices are ubiquitous, but their resources are limited. Nevertheless, they must be capable of running computationally intensive software, for example for image stitching, face recognition, and simulation-based artificial intelligence. As a solution, mobile devices can offload computation to nearby resources. Distributed computing environments provide such features but ignore the nature of mobile devices, such as mobility and changes in network connectivity or battery state. This leads to long delays, which reduce the quality of experience for the user. In this paper, we present Mobile Tasklets, a mobile extension of our distributed computing middleware. The design of Mobile Tasklets includes context monitoring, context-aware scheduling mechanisms, and an Android API for application integration. We identify the challenges of integrating mobile devices into our distributed computing environment and evaluate Mobile Tasklets in a real-world testbed under different context settings.
{"title":"Using quality of computation to enhance quality of service in mobile computing systems","authors":"Dominik Schäfer, Janick Edinger, Tobias Borlinghaus, Justin Mazzola Paluska, C. Becker","doi":"10.1109/IWQoS.2017.7969146","DOIUrl":"https://doi.org/10.1109/IWQoS.2017.7969146","url":null,"abstract":"Mobile devices are ubiquitous but their resources are limited. However, they must be capable to run computationally intensive software, for example for image stitching, face recognition, and simulation-based artificial intelligence. As a solution, mobile devices can use nearby resources to offload computation. Distributed computing environments provide such features but ignore the nature of mobile devices, such as mobility, network, or battery changes. This leads to long delays, which reduce the quality of experience for the user. In this paper, we present Mobile Tasklets, a mobile extension of our distributed computing middleware. The design of Mobile Tasklets includes context monitoring, context-aware scheduling mechanisms, and an Android API for application integration. We identify the challenges of the integration of mobile devices into our distributed computing environment. We evaluate Mobile Tasklets in a real-world testbed with different context settings.","PeriodicalId":422861,"journal":{"name":"2017 IEEE/ACM 25th International Symposium on Quality of Service (IWQoS)","volume":"263 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116239032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-06-14 | DOI: 10.1109/IWQoS.2017.7969145
Xun Shao, H. Asaeda
Recently, with the development of content-centric networking and Telco-CDNs, inter-domain cache sharing has attracted increasing attention. With inter-domain cache sharing, ISPs can further reduce transit costs and improve quality of service (QoS) by accessing their neighbors' caching devices. Currently, cache sharing is limited to free-peering ISPs. Enabling eyeball ISPs to share caches with their transit providers has substantial benefits, but both technical and economic challenges must be addressed. In this study, we model the inter-domain cache-sharing problem as a double-sided market. In this market, eyeball ISPs receive reimbursement from transit providers for sharing their local caches, while transit ISPs obtain content from the caches of eyeball ISPs at lower cost than from upper-tier ISPs. We propose an efficient cooperative mechanism based on the Nash bargaining solution to address resource allocation in this double-sided market. The proposed mechanism satisfies both the desired technical and economic properties.
{"title":"A cooperative mechanism for efficient inter-domain in-network cache sharing","authors":"Xun Shao, H. Asaeda","doi":"10.1109/IWQoS.2017.7969145","DOIUrl":"https://doi.org/10.1109/IWQoS.2017.7969145","url":null,"abstract":"Recently, as the development of content-centric networking and Telco-CDNs, inter-domain cache sharing has attracted increased attention. With inter-domain cache sharing, ISPs can further reduce transit cost and improve the quality-of-service (QoS) by accessing their neighbors' caching devices. Currently, cache sharing is limited to free-peering ISPs. Enabling eyeball ISPs to share caches with their transit providers has substantial benefits, but both technical and economic challenges must be addressed. In this study, we consider the inter-domain cache-sharing problem as a double-sided market. In this market, eyeball ISPs receive reimbursement from transit providers for sharing their local caches, while transit ISPs obtain content from the caches of eyeball ISPs at lower cost than by obtaining content from upper tier ISPs. We propose an efficient cooperative mechanism based on Nash bargaining solution to address resource allocation issues in the double-sided market. The proposed mechanism can satisfy both the technical and economic properties desired.","PeriodicalId":422861,"journal":{"name":"2017 IEEE/ACM 25th International Symposium on Quality of Service (IWQoS)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115003022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-06-14 | DOI: 10.1109/IWQoS.2017.7969114
Shang Gao, Zhe Peng, Bin Xiao, Qingjun Xiao, Yubo Song
Energy-saving solutions on smartphones can greatly extend battery life. However, today's push services require keep-alive connections to notify users of incoming messages, which consume substantial energy and quickly drain a smartphone's battery over cellular connections. Most keep-alive connections force smartphones to frequently send heartbeat packets that create additional energy-consuming radio tails. No previous work has addressed the high energy consumption of keep-alive connections in smartphone push services. In this paper, we propose the Single Connection Proxy (SCoP) system, based on fog computing, to merge multiple keep-alive connections into one and push messages in an energy-saving way. SCoP satisfies a predefined message delay constraint while minimizing smartphone energy consumption for both real-time and delay-tolerant apps. SCoP is transparent to both smartphones and push servers and requires no changes to today's push service framework. Theoretical analysis shows that, given Poisson-distributed incoming messages, SCoP can reduce energy consumption by up to 50%. We implement the SCoP system, including both the local proxy on the smartphone and the remote proxy on the “Fog”. Experimental results show that the proposed system consumes 30% less energy than the current push service for real-time apps, and 60% less for delay-tolerant apps.
{"title":"SCoP: Smartphone energy saving by merging push services in Fog computing","authors":"Shang Gao, Zhe Peng, Bin Xiao, Qingjun Xiao, Yubo Song","doi":"10.1109/IWQoS.2017.7969114","DOIUrl":"https://doi.org/10.1109/IWQoS.2017.7969114","url":null,"abstract":"Energy saving solutions on smartphone devices can greatly extend a smartphone's lasting time. However, today's push services require keep-alive connections to notify users of incoming messages, which cause costly energy consuming and drain a smartphone's battery quickly in cellular communications. Most keep-alive connections force smartphones to frequently send heartbeat packets that create additional energy-consuming radio-tails. No previous work has addressed the high-energy consumption of keep-alive connections in smartphones push services. In this paper, we propose Single Connection Proxy (SCoP) system based on fog computing to merge multiple keep-alive connections into one, and push messages in an energy-saving way. The new design of SCoP can satisfy a predefined message delay constraint and minimize the smartphone energy consumption for both real-time and delay-tolerant apps. SCoP is transparent to both smartphones and push servers, which does not need any changes on today's push service framework. Theoretical analysis shows that, given the Poisson distribution of incoming messages, SCoP can reduce the energy consumption by up to 50%. We implement SCoP system, including both the local proxy on the smartphone and remote proxy on the “Fog”. Experimental results show that the proposed system consumes 30% less energy than the current push service for real-time apps, and 60% less energy for delay-tolerant apps.","PeriodicalId":422861,"journal":{"name":"2017 IEEE/ACM 25th International Symposium on Quality of Service (IWQoS)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130083915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-06-14 | DOI: 10.1109/IWQoS.2017.7969130
Shenglin Zhang, Weibin Meng, Jiahao Bu, Sen Yang, Y. Liu, Dan Pei, Jun Xu, Yu Chen, Hui Dong, Xianping Qu, Lei Song
Syslogs on switches are a rich source of information for both post-mortem diagnosis and proactive prediction of switch failures in a datacenter network. However, such information can be effectively extracted only through proper processing of syslogs, e.g., using suitable machine learning techniques. A common approach to syslog processing is to extract (i.e., build) templates from historical syslog messages and then match new syslog messages against these templates. However, existing template extraction techniques either have low accuracy in learning the “correct” set of templates, or do not support incremental learning, in the sense that the entire set of templates has to be rebuilt (by processing all historical syslog messages again) whenever a new template is added, which is prohibitively expensive computationally for a large datacenter network. To address these two problems, we propose a frequent template tree (FT-tree) model in which frequent combinations of (syslog) words are identified and then used as message templates. FT-tree empirically extracts message templates more accurately than existing approaches and naturally supports incremental learning. To compare the performance of FT-tree with three other template learning techniques, we evaluated them on two years' worth of failure tickets and syslogs collected from switches deployed across 10+ datacenters of a tier-1 cloud service provider. The experiments demonstrate that FT-tree improves estimation/prediction accuracy (as measured by F1) by 155% to 188%, and computational efficiency by 117 to 730 times.
{"title":"Syslog processing for switch failure diagnosis and prediction in datacenter networks","authors":"Shenglin Zhang, Weibin Meng, Jiahao Bu, Sen Yang, Y. Liu, Dan Pei, Jun Xu, Yu Chen, Hui Dong, Xianping Qu, Lei Song","doi":"10.1109/IWQoS.2017.7969130","DOIUrl":"https://doi.org/10.1109/IWQoS.2017.7969130","url":null,"abstract":"Syslogs on switches are a rich source of information for both post-mortem diagnosis and proactive prediction of switch failures in a datacenter network. However, such information can be effectively extracted only through proper processing of syslogs, e.g., using suitable machine learning techniques. A common approach to syslog processing is to extract (i.e., build) templates from historical syslog messages and then match syslog messages to these templates. However, existing template extraction techniques either have low accuracies in learning the “correct” set of templates, or does not support incremental learning in the sense the entire set of templates has to be rebuilt (from processing all historical syslog messages again) when a new template is to be added, which is prohibitively expensive computationally if used for a large datacenter network. To address these two problems, we propose a frequent template tree (FT-tree) model in which frequent combinations of (syslog) words are identified and then used as message templates. FT-tree empirically extracts message templates more accurately than existing approaches, and naturally supports incremental learning. To compare the performance of FT-tree and three other template learning techniques, we experimented them on two-years' worth of failure tickets and syslogs collected from switches deployed across 10+ datacenters of a tier-1 cloud service provider. The experiments demonstrated that FT-tree improved the estimation/prediction accuracy (as measured by F1) by 155% to 188%, and the computational efficiency by 117 to 730 times.","PeriodicalId":422861,"journal":{"name":"2017 IEEE/ACM 25th International Symposium on Quality of Service (IWQoS)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124883039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-06-14 | DOI: 10.1109/IWQoS.2017.7969180
Jia Zhao, M. Ma, Wei Gong, Lei Zhang, Yifei Zhu, Jiangchuan Liu
There has been explosive growth in the Mobile Personal Livestreaming (MPL) market since 2016. MPL services are booming not only because they offer popular live content from spontaneous, personalized broadcasters, but also because they are deliberately designed as innovative social networking service (SNS) platforms. The latter is an important aspect that distinguishes MPL from traditional livestreaming services. In this paper, we study the social networking of a large-scale MPL service, “Inke” (with more than 200 million registered users and 15 million daily active users), in China. By analyzing the dataset we crawled and the features of the Inke app, we show that the social media stickiness of Inke comes from three aspects: the follower-followee model, the virtual-gift-based incentive mechanism, and the multi-perspective interactivity between broadcasters and viewers. First, Inke adopts the follower-followee model rather than the traditional broadcaster-viewer model, and every user in Inke can be a broadcaster. This gives MPL patterns that differ from both traditional livestreaming services and SNS platforms. Second, Inke uses virtual gift giving and user ranking as its incentive mechanism. Our measurement results show that this mechanism can indeed enhance user stickiness. Furthermore, Inke incorporates a variety of features during broadcasting to strengthen interactivity. The insights we gain in this paper have important implications for both existing and future designs.
{"title":"Social media stickiness in Mobile Personal Livestreaming service","authors":"Jia Zhao, M. Ma, Wei Gong, Lei Zhang, Yifei Zhu, Jiangchuan Liu","doi":"10.1109/IWQoS.2017.7969180","DOIUrl":"https://doi.org/10.1109/IWQoS.2017.7969180","url":null,"abstract":"There has been explosive growth in Mobile Personal Livestreaming (MPL) market since 2016. MPL services are booming not only because they introduce the popular live content by spontaneous and personalized broadcasters, but also because they are deliberately designed to be the innovative social networking service (SNS) platforms. The latter is a very important aspect that distinguishes MPL from the traditional livestreaming services. In this paper, we study the social networking of a large scale MPL service “Inke” (with more than 200 million registered users, 15 million daily active users) in China. By analyzing the dataset we crawl and the features of Inke app, we show that the social media stickiness of Inke comes from three aspects: the follower-followee model, the virtual-gift-based incentive mechanism, and the multi-perspective interactivity between broadcasters and viewers. First, Inke introduces the follower-followee model rather than the traditional broadcaster-viewer model, and every user in Inke can be a broadcaster. This makes MPL have some different patterns from both the traditional livestreaming serives and SNS platforms. Second, Inke use virtual gift giving and user ranking as its incentive mechanism. Our measurement results show that this mechanism can indeed enhance user stickiness. Furthermore, Inke incorporates a variety of features during broadcasting to strengthen interactivity. The insight we gain in this paper has important implications for both existing and future designs.","PeriodicalId":422861,"journal":{"name":"2017 IEEE/ACM 25th International Symposium on Quality of Service (IWQoS)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114767105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-06-14 | DOI: 10.1109/IWQoS.2017.7969172
Tong Zhang, Peng Cheng, Wenxue Cheng, Bo Wang, Fengyuan Ren
The shuffle transfer pattern is widely adopted in today's cluster computing applications, and the completion time of each group of transmissions directly affects application performance. Because of the restriction on the number of concurrent threads and the TCP Incast problem, the randomized data fetching strategy is widely employed for this kind of communication in practice. In this paper, to assess the performance of randomized data fetching, we build a general analytical model and define two metrics, link overload probability and K-deviation load balancing probability, to evaluate the degree of link overload and load balancing, respectively, since both are closely related to the transfer completion time. Leveraging our model, we theoretically analyze the transfer performance in three typical scenarios and provide recommendations for setting the number of concurrent connections per receiver. Finally, we validate the theoretical analysis and the recommendations through extensive simulations.
{"title":"Performance analysis of randomized data fetching in cluster computing","authors":"Tong Zhang, Peng Cheng, Wenxue Cheng, Bo Wang, Fengyuan Ren","doi":"10.1109/IWQoS.2017.7969172","DOIUrl":"https://doi.org/10.1109/IWQoS.2017.7969172","url":null,"abstract":"The shuffle transfer pattern is widely adopted in today's cluster computing applications and the completion time of each group of transmissions directly affects application performance. Because of the restriction on the number of concurrent threads and the TCP Incast problem, the randomized data fetching strategy is widely employed in this kind of communication in practice. In this paper, to assess the performance of randomized data fetching, we build a general analytical model and define two metrics - link overload probability and K-deviation load balancing probability - to evaluate the degree of link overload and load balancing respectively, since they are closely related to the transfer completion time. Leveraging our model, we theoretically analyze the transfer performance in three typical scenarios and provide recommendations for setting the number of concurrent connections per receiver. Finally, we validate the theoretical analysis as well as the recommendations through extensive simulations.","PeriodicalId":422861,"journal":{"name":"2017 IEEE/ACM 25th International Symposium on Quality of Service (IWQoS)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122899253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-06-14 | DOI: 10.1109/IWQoS.2017.7969149
Silvery Fu, Yifei Zhu, Jiangchuan Liu
In the public cloud market, there has been a constant battle between providers and users over the billing options of cloud instances. Users generally have to pay for the entire billing cycle even for fractional usage. Ideally, the residual life-cycles should be resalable by the users, which demands efficient resource consolidation and multiplexing; otherwise, the revenue and use cases are confined by the transient nature of the instances. This paper presents HARV, a novel cloud service that facilitates the management and trade of cloud instances through a third-party platform that runs buyers' tasks. The platform relies on hybrid virtualization, an infrastructure layout integrating both hypervisor-based virtualization and lightweight containerization. It further incorporates a truthful online auction mechanism for instance trading and resource allocation. Our design achieves efficient resource consolidation with no need for provider-level support, and we have deployed a prototype of HARV on the Amazon EC2 public cloud. Our evaluations on both micro-benchmarks and real-life workloads reveal that applications experience negligible performance overhead when hosted on HARV. Trace-driven simulations further show that HARV can achieve substantial cost savings.
{"title":"HARV: Harnessing hybrid virtualization to improve instance (re)usage in public cloud","authors":"Silvery Fu, Yifei Zhu, Jiangchuan Liu","doi":"10.1109/IWQoS.2017.7969149","DOIUrl":"https://doi.org/10.1109/IWQoS.2017.7969149","url":null,"abstract":"In the public cloud market, there has been a constant battle over the billing options of the cloud instances between their providers and their users. The users generally have to pay for the entire billing cycle even on fractional usage. Ideally, the residual life-cycles should be resalable by the users, which demands efficient resource consolidation and multiplexing; otherwise, the revenue and use cases are confined by the transient nature of the instances. This paper presents HARV, a novel cloud service that facilitates the management and trade of cloud instances through a third-party platform to run buyers' tasks. The platform relies on hybrid virtualization, an infrastructure layout integrating both the hypervisor-based virtualization and lightweight containerization. It further incorporates a truthful online auction mechanism for instance trading and resource allocation. Our design achieves efficient resource consolidation with no need for provider-level support, and we have deployed a prototype of HARV on the Amazon EC2 public cloud. Our evaluations on both micro-benchmarks and real-life workloads reveal that applications experience negligible performance overhead when hosted on HARV. Trace-driven simulations further show that HARV can achieve substantial cost savings.","PeriodicalId":422861,"journal":{"name":"2017 IEEE/ACM 25th International Symposium on Quality of Service (IWQoS)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124488751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-06-14 | DOI: 10.1109/IWQoS.2017.7969174
Shupeng Zhang, Carol J. Fung, Shaohan Huang, Zhongzhi Luan, D. Qian
Nowadays, systems providing user-oriented services often exhibit periodic patterns due to the repetitive behaviors of people's daily routines. The monitoring data of such systems are time series of observations that record the observed system status at sampled times during each day. The periodicity and multidimensional character of such monitoring data can be exploited by anomaly detection algorithms to enhance their detection capability. Data periodicity enables proactive anomaly prediction, and the correlation among multidimensional series can yield more accurate results than processing each dimension separately. However, existing anomaly detection methods handle only one-dimensional series and do not consider data periodicity. In addition, they often require sufficient labelled data to train their models before they can be used. In this paper, we present an unsupervised anomaly detection algorithm called Periodic Self-Organizing Maps (PSOM) to detect anomalies in periodic time series. PSOMs can be used to detect anomalies in multidimensional periodic series as well as in one-dimensional periodic and aperiodic series. Our evaluation on real data shows that PSOM outperforms supervised methods such as SARIMA and the Holt-Winters method.
{"title":"PSOM: Periodic Self-Organizing Maps for unsupervised anomaly detection in periodic time series","authors":"Shupeng Zhang, Carol J. Fung, Shaohan Huang, Zhongzhi Luan, D. Qian","doi":"10.1109/IWQoS.2017.7969174","DOIUrl":"https://doi.org/10.1109/IWQoS.2017.7969174","url":null,"abstract":"Nowadays, systems providing user-oriented services often demonstrate periodic patterns due to the repetitive behaviors from people's daily routines. The monitoring data of such systems are time series of observations that record observed system status at sampled times during each day. The periodic feature and multidimensional character of such monitoring data can be well utilized by anomaly detection algorithms to enhance their detection capability. The data periodicity can be used to provide proactive anomaly prediction capability and the correlation among multidimensional series can provide more accurate results than processing the observations separately. However, existing anomaly detection methods only handle one dimensional series and do not consider the data periodicity. In addition, they often require sufficient labelled data to train the models before they can be used. In this paper, we present an unsupervised anomaly detection algorithm called Periodic Self-Organizing Maps (PSOM) to detect anomalies in periodic time series. PSOMs can be used to detect anomalies in multidimensional periodic series as well as one dimensional periodic series and aperiodic series. Our real data evaluation shows that the PSOM outperforms other supervised methods such as SARIMA and Holt-Winters method.","PeriodicalId":422861,"journal":{"name":"2017 IEEE/ACM 25th International Symposium on Quality of Service (IWQoS)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133110847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-06-14 | DOI: 10.1109/IWQoS.2017.7969165
Xiaoyan Yin, Yanjiao Chen, Baochun Li
Crowdsourcing leverages the collective intelligence of massive numbers of crowd workers to accomplish tasks in a cost-effective way. On a crowdsourcing platform, it is challenging to assign tasks to workers appropriately because of the heterogeneity of both tasks and workers. In this paper, we explore the problem of assigning workers with various skill levels to tasks with different quality requirements and budget constraints. We first formulate task assignment as a many-to-one matching problem, in which multiple workers are assigned to a task, and the task is successfully completed only if a minimum quality requirement can be satisfied within its limited budget. Unlike traditional task assignment mechanisms, which focus on utility maximization for the crowdsourcing platform, our matching framework takes into consideration the preferences of individual crowdsourcers and workers towards each other. We design a novel algorithm that generates a stable outcome for the many-to-one matching problem with lower and upper bounds (i.e., the quality requirement and budget constraint) as well as heterogeneous worker skill levels. Through extensive simulations, we show that the proposed algorithm greatly improves the success ratio of task accomplishment and worker happiness compared with existing algorithms.
{"title":"Task assignment with guaranteed quality for crowdsourcing platforms","authors":"Xiaoyan Yin, Yanjiao Chen, Baochun Li","doi":"10.1109/IWQoS.2017.7969165","DOIUrl":"https://doi.org/10.1109/IWQoS.2017.7969165","url":null,"abstract":"Crowdsourcing leverages the collective intelligence of the massive crowd workers to accomplish tasks in a cost-effective way. On a crowdsourcing platform, it is challenging to assign tasks to workers in an appropriate way due to heterogeneity in both tasks and workers. In this paper, we explore the problem of assigning workers with various skill levels to tasks with different quality requirements and budget constraints. We first formulate the task assignment as a many-to-one matching problem, in which multiple workers are assigned to a task, and the task can be successfully completed only if a minimum quality requirement can be satisfied within its limited budget. Different from traditional task assignment mechanisms which focus on utility maximization for the crowdsourcing platform, our proposed matching framework takes into consideration the preferences of individual crowdsourcers and workers towards each other. We design a novel algorithm that can generate a stable outcome for the many-to-one matching problem with lower and upper bounds (i.e., quality requirement and budget constraint), as well as heterogeneous worker skill levels. Through extensive simulations, we show that the proposed algorithm can greatly improve the success ratio of task accomplishment and worker happiness, when compared with existing algorithms.","PeriodicalId":422861,"journal":{"name":"2017 IEEE/ACM 25th International Symposium on Quality of Service (IWQoS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114381002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-06-14 | DOI: 10.1109/IWQoS.2017.7969113
Changhua Pei, Youjian Zhao, Yunxin Liu, Kun Tan, Jiansong Zhang, Yuan Meng, Dan Pei
WiFi has become the primary method of accessing the Internet. However, WiFi-hop latency, particularly in dense WiFi environments, is far from satisfactory for delay-sensitive applications such as Web browsing and VoIP [1]. WiFi latency mainly comes from two kinds of queues: the host queue and the distributed queue, the latter caused by the CSMA/CA mechanism when multiple nodes contend for the channel. While the host queue can easily be bypassed using priority scheduling at the end-host, the distributed queue cannot. IEEE 802.11e attempts to provide priorities in this distributed queue by adjusting MAC-layer parameters, but it does not scale as the number of delay-sensitive flows increases. In this paper, we propose and design QAir, a practical solution to reduce the WiFi latency of delay-sensitive flows in dense WiFi networks. QAir takes a different approach and transfers this distributed queue to the host queue. Consequently, delay-sensitive flows can bypass the entire queue and their latency can be greatly reduced. QAir works in a distributed manner with no centralized scheduler. We have implemented QAir on commodity WiFi devices. Experimental results show that, compared to the 802.11 DCF baseline, QAir reduces the average WiFi-hop latency of delay-sensitive flows by 50–75%.
{"title":"Latency-based WiFi congestion control in the air for dense WiFi networks","authors":"Changhua Pei, Youjian Zhao, Yunxin Liu, Kun Tan, Jiansong Zhang, Yuan Meng, Dan Pei","doi":"10.1109/IWQoS.2017.7969113","DOIUrl":"https://doi.org/10.1109/IWQoS.2017.7969113","url":null,"abstract":"WiFi has become the primary method to access the Internet. However, the WiFi-hop latency, particularly in dense-WiFi environments, is far from satisfactory [1], to support delay-sensitive applications such as Web browsing and VoIP. The WiFi latency mainly comes from two kinds of queues: the host queue and the distributed queue, which is caused by CSMA/CA mechanism when multiple nodes contend for the channel. While the host queue can be easily bypassed using priority scheduling at end-host, the distributed queue is not. Previously, IEEE 802.11e tries to provide priorities in this distributed queue by adjusting the MAC layer parameters, but it does not scale when there are increasing number of delay-sensitive flows. In this paper, we propose and design QAir, a practical solution to reduce WiFi latency of delay-sensitive flows in dense WiFi networks. QAir takes a different approach to transfer this distributed queue to host queue. Consequently, the delay-sensitive flows can bypass the entire queue and their latency can be greatly reduced. QAir works in a distributed manner with no centralized scheduler. We have implemented QAir on commodity WiFi devices. Experimental results show that, compared to the 802.11 DCF baseline, QAir can reduce the average WiFi-hop latency of delay-sensitive flows by 50–75%.","PeriodicalId":422861,"journal":{"name":"2017 IEEE/ACM 25th International Symposium on Quality of Service (IWQoS)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125106722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}