The Cost of Uncertainty in Curing Epidemics
Jessica Hoffmann, C. Caramanis
DOI: https://doi.org/10.1145/3219617.3219622

Epidemic models are used across the biological and social sciences, engineering, and computer science, and have had an important impact on the study of the dynamics of human diseases and computer viruses, but also of trends, rumors, viral videos, and, most recently, the spread of fake news on social networks. In this paper, we focus on epidemics propagating on a graph, as introduced by the seminal paper [5]. In particular, we consider so-called SI models (see below for a precise definition), where an infected node can only propagate the infection to its non-infected neighbors, as opposed to the fully mixed models considered in the early literature. This graph-based approach provides a more realistic model, in which the spread of the epidemic is determined by the connectivity of the graph, and accordingly some nodes may play a larger role than others in the spread of the infection.
{"title":"The Cost of Uncertainty in Curing Epidemics","authors":"Jessica Hoffmann, C. Caramanis","doi":"10.1145/3219617.3219622","DOIUrl":"https://doi.org/10.1145/3219617.3219622","url":null,"abstract":"Epidemic models are used across biological and social sciences, engineering, and computer science, and have had important impact in the study of the dynamics of human disease and computer viruses, but also trends rumors, viral videos, and most recently the spread of fake news on social networks. In this paper, we focus on epidemics propagating on a graph, as introduced by the seminal paper [5]. In particular, we consider so-called SI models (see below for a precise definition) where an infected node can only propagate the infection to its non-infected neighbor, as opposed to the fully mixed models considered in the early literature. This graph-based approach provides a more realistic model, in which the spread of the epidemic is determined by the connectivity of the graph, and accordingly some nodes may play a larger role than others in the spread of the infection.","PeriodicalId":210440,"journal":{"name":"Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116410976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Practical Bounds on Optimal Caching with Variable Object Sizes
Daniel S. Berger, Nathan Beckmann, Mor Harchol-Balter
DOI: https://doi.org/10.1145/3219617.3219627

Many recent caching systems aim to improve miss ratios, but there is no good sense among practitioners of how much further miss ratios can be improved. In other words, should the systems community continue working on this problem? Currently, there is no principled answer to this question. In practice, object sizes often vary by several orders of magnitude, and in this setting computing the optimal miss ratio (OPT) is known to be NP-hard. The few known results on caching with variable object sizes provide very weak bounds and are impractical to compute on traces of realistic length. We propose a new method to compute upper and lower bounds on OPT. Our key insight is to represent caching as a min-cost flow problem; hence we call our method the flow-based offline optimal (FOO). We prove that, under simple independence assumptions, FOO's bounds become tight as the number of objects goes to infinity. Indeed, FOO's error over 10M requests of production CDN and storage traces is negligible: at most 0.3%. FOO thus reveals, for the first time, the limits of caching with variable object sizes. While FOO is very accurate, it is computationally impractical on traces with hundreds of millions of requests. We therefore extend FOO to obtain more efficient bounds on OPT, which we call the practical flow-based offline optimal (PFOO). We evaluate PFOO on several full production traces and use it to compare OPT to prior online policies. This analysis shows that current caching systems are in fact still far from optimal, suffering 11-43% more cache misses than OPT, whereas the best prior offline bounds suggest that there is essentially no room for improvement.
{"title":"Practical Bounds on Optimal Caching with Variable Object Sizes","authors":"Daniel S. Berger, Nathan Beckmann, Mor Harchol-Balter","doi":"10.1145/3219617.3219627","DOIUrl":"https://doi.org/10.1145/3219617.3219627","url":null,"abstract":"Many recent caching systems aim to improve miss ratios, but there is no good sense among practitioners of how much further miss ratios can be improved. In other words, should the systems community continue working on this problem? Currently, there is no principled answer to this question. In practice, object sizes often vary by several orders of magnitude, where computing the optimal miss ratio (OPT) is known to be NP-hard. The few known results on caching with variable object sizes provide very weak bounds and are impractical to compute on traces of realistic length. We propose a new method to compute upper and lower bounds on OPT. Our key insight is to represent caching as a min-cost flow problem, hence we call our method the flow-based offline optimal (FOO). We prove that, under simple independence assumptions, FOO's bounds become tight as the number of objects goes to infinity. Indeed, FOO's error over 10M requests of production CDN and storage traces is negligible: at most 0.3%. FOO thus reveals, for the first time, the limits of caching with variable object sizes. While FOO is very accurate, it is computationally impractical on traces with hundreds of millions of requests. We therefore extend FOO to obtain more efficient bounds on OPT, which we call practical flow-based offline optimal (PFOO). We evaluate PFOO on several full production traces and use it to compare OPT to prior online policies. This analysis shows that current caching systems are in fact still far from optimal, suffering 11-43% more cache misses than OPT, whereas the best prior offline bounds suggest that there is essentially no room for improvement.","PeriodicalId":210440,"journal":{"name":"Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125917414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dandelion++: Lightweight Cryptocurrency Networking with Formal Anonymity Guarantees
G. Fanti, S. Venkatakrishnan, Surya Bakshi, Bradley Denby, Shruti Bhargava, Andrew K. Miller, P. Viswanath
DOI: https://doi.org/10.1145/3219617.3219620

Recent work has demonstrated significant anonymity vulnerabilities in Bitcoin's networking stack. In particular, the current mechanism for broadcasting Bitcoin transactions allows third-party observers to link transactions to the IP addresses that originated them. This lays the groundwork for low-cost, large-scale deanonymization attacks. In this work, we present Dandelion++, a first-principles defense against large-scale deanonymization attacks with near-optimal information-theoretic guarantees. Dandelion++ builds upon a recent proposal called Dandelion that had similar goals. However, in this paper, we highlight some simplifying assumptions made in Dandelion and show how they can lead to serious deanonymization attacks when violated. In contrast, Dandelion++ defends against stronger adversaries that are allowed to disobey the protocol. Dandelion++ is lightweight, scalable, and completely interoperable with the existing Bitcoin network. We evaluate it through experiments on Bitcoin's mainnet (i.e., the live Bitcoin network) to demonstrate its interoperability and low broadcast latency overhead.
{"title":"Dandelion++: Lightweight Cryptocurrency Networking with Formal Anonymity Guarantees","authors":"G. Fanti, S. Venkatakrishnan, Surya Bakshi, Bradley Denby, Shruti Bhargava, Andrew K. Miller, P. Viswanath","doi":"10.1145/3219617.3219620","DOIUrl":"https://doi.org/10.1145/3219617.3219620","url":null,"abstract":"Recent work has demonstrated significant anonymity vulnerabilities in Bitcoin's networking stack. In particular, the current mechanism for broadcasting Bitcoin transactions allows third-party observers to link transactions to the IP addresses that originated them. This lays the groundwork for low-cost, large-scale deanonymization attacks. In this work, we present Dandelion++, a first-principles defense against large-scale deanonymization attacks with near-optimal information-theoretic guarantees. Dandelion++ builds upon a recent proposal called Dandelion that exhibited similar goals. However, in this paper, we highlight some simplifying assumptions made in Dandelion, and show how they can lead to serious deanonymization attacks when violated. In contrast, Dandelion++ defends against stronger adversaries that are allowed to disobey protocol. Dandleion++ is lightweight, scalable, and completely interoperable with the existing Bitcoin network.We evaluate it through experiments on Bitcoin's mainnet (i.e., the live Bitcoin network) to demonstrate its interoperability and low broadcast latency overhead.","PeriodicalId":210440,"journal":{"name":"Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems","volume":"2016 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114690688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fork and Join Queueing Networks with Heavy Tails: Scaling Dimension and Throughput Limit
Yun Zeng, Jian Tan, Cathy H. Xia
DOI: https://doi.org/10.1145/3219617.3219668

Parallel and distributed computing systems are foundational to the success of cloud computing and big data analytics. Fork-Join Queueing Networks with Blocking (FJQN/Bs) are natural models for such systems. While engineering solutions have long been used to build and scale such systems, it is challenging to rigorously characterize the throughput performance of ever-growing systems, especially in the presence of heavy-tailed delays. In this paper, we utilize an infinite sequence of FJQN/Bs to study the throughput limit and focus on regularly varying service times with index α > 1. We introduce two novel geometric concepts - scaling dimension and extended metric dimension - and show that an infinite sequence of FJQN/Bs is throughput scalable if the extended metric dimension is less than α - 1, and only if the scaling dimension is at most α - 1. These results provide new insights into the scalability of a rich class of FJQN/Bs.
{"title":"Fork and Join Queueing Networks with Heavy Tails: Scaling Dimension and Throughput Limit","authors":"Yun Zeng, Jian Tan, Cathy H. Xia","doi":"10.1145/3219617.3219668","DOIUrl":"https://doi.org/10.1145/3219617.3219668","url":null,"abstract":"Parallel and distributed computing systems are foundational to the success of cloud computing and big data analytics. Fork-Join Queueing Networks with Blocking (FJQN/Bs) are natural models for such systems. While engineering solutions have long been made to build and scale such systems, it is challenging to rigorously characterize the throughput performance of ever-growing systems, especially in the presence of heavy-tailed delays. In this paper, we utilize an infinite sequence of FJQN/Bs to study the throughput limit and focus on regularly varying service times with index α>1. We introduce two novel geometric concepts - scaling dimension and extended metric dimension - and show that an infinite sequence of FJQN/Bs is throughput scalable if the extended metric dimension <α-1 and only if the scaling dimension łe α-1. These results provide new insights on the scalability of a rich class of FJQN/Bs.","PeriodicalId":210440,"journal":{"name":"Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129884842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-armed Bandit with Additional Observations
Donggyu Yun, Sumyeong Ahn, A. Proutière, Jinwoo Shin, Yung Yi
DOI: https://doi.org/10.1145/3219617.3219639

We study multi-armed bandit (MAB) problems with additional observations, where in each round, the decision maker selects an arm to play and can also observe the rewards of additional arms (within a given budget) by paying certain costs. We propose algorithms that are asymptotically optimal and order-optimal in their regret under stochastic and adversarial rewards, respectively.
{"title":"Multi-armed Bandit with Additional Observations","authors":"Donggyu Yun, Sumyeong Ahn, A. Proutière, Jinwoo Shin, Yung Yi","doi":"10.1145/3219617.3219639","DOIUrl":"https://doi.org/10.1145/3219617.3219639","url":null,"abstract":"We study multi-armed bandit (MAB) problems with additional observations, where in each round, the decision maker selects an arm to play and can also observe rewards of additional arms (within a given budget) by paying certain costs. We propose algorithms that are asymptotic-optimal and order-optimal in their regrets under the settings of stochastic and adversarial rewards, respectively.","PeriodicalId":210440,"journal":{"name":"Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126536410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the Power-of-d-choices with Least Loaded Server Selection
T. Hellemans, B. V. Houdt
DOI: https://doi.org/10.1145/3219617.3219664

Motivated by distributed schedulers that combine the power-of-d-choices with late binding and by systems that use replication with cancellation-on-start, we study the performance of the LL(d) policy, which assigns a job to the server that currently has the least workload among d randomly selected servers in large-scale homogeneous clusters. We consider general job size distributions and propose a partial integro-differential equation to describe the evolution of the system. This equation relies on the earlier proven ansatz for LL(d), which asserts that the workloads of any finite set of queues become independent of one another as the number of servers tends to infinity. Based on this equation, we propose a fixed point iteration for the limiting workload distribution and study its convergence.
{"title":"On the Power-of-d-choices with Least Loaded Server Selection","authors":"T. Hellemans, B. V. Houdt","doi":"10.1145/3219617.3219664","DOIUrl":"https://doi.org/10.1145/3219617.3219664","url":null,"abstract":"Motivated by distributed schedulers that combine the power-of-d-choices with late binding and systems that use replication with cancellation-on-start, we study the performance of the LL(d) policy which assigns a job to a server that currently has the least workload among d randomly selected servers in large-scale homogeneous clusters. We consider general job size distributions and propose a partial integro-differential equation to describe the evolution of the system. This equation relies on the earlier proven ansatz for LL(d) which asserts that the workload distribution of any finite set of queues becomes independent of one another as the number of servers tends to infinity. Based on this equation we propose a fixed point iteration for the limiting workload distribution and study its convergence.","PeriodicalId":210440,"journal":{"name":"Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126529141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Refined Mean Field Approximation
Nicolas Gast, B. V. Houdt
DOI: https://doi.org/10.1145/3219617.3219663

Stochastic models have been used to assess the performance of computer (and other) systems for many decades. As a direct analysis of large and complex stochastic models is often prohibitive, approximation methods have been devised to study their behavior. One very popular approximation method relies on mean field theory. Its widespread use can be explained by the relative ease of defining and solving a mean field model, combined with its high accuracy for large systems.
{"title":"A Refined Mean Field Approximation","authors":"Nicolas Gast, B. V. Houdt","doi":"10.1145/3219617.3219663","DOIUrl":"https://doi.org/10.1145/3219617.3219663","url":null,"abstract":"Stochastic models have been used to assess the performance of computer (and other) systems for many decades. As a direct analysis of large and complex stochastic models is often prohibitive, approximations methods to study their behavior have been devised. One very popular approximation method relies on mean field theory. Its widespread use can be explained by the relative ease involved to define and solve a mean field model in combination with its high accuracy for large systems.","PeriodicalId":210440,"journal":{"name":"Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123152572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Non-Preemptive VM Scheduling in the Cloud
Konstantinos Psychas, Javad Ghaderi
DOI: https://doi.org/10.1145/3219617.3219644

We study the problem of scheduling VMs (Virtual Machines) in a distributed server platform, motivated by cloud computing applications. The VMs arrive dynamically over time and require a certain amount of resources (e.g., memory, CPU) for the duration of their service. To avoid costly preemptions, we consider non-preemptive scheduling: each VM has to be assigned to a server that has enough residual capacity to accommodate it, and once a VM is assigned to a server, its service cannot be disrupted (preempted). Prior approaches to this problem either have high complexity, require synchronization among the servers, or yield queue sizes/delays that are excessively large. We propose a non-preemptive scheduling algorithm that resolves these issues. In general, given an approximation algorithm to Knapsack with approximation ratio r, our scheduling algorithm can provide an rβ fraction of the throughput region for β < r. In the special case of a greedy approximation algorithm to Knapsack, we further show that this condition can be relaxed to β < 1. The parameters β and r can be tuned to provide a tradeoff between the achievable throughput, delay, and computational complexity of the scheduling algorithm. Finally, extensive simulation results using both synthetic and real traffic traces are presented to verify the performance of our algorithm.
{"title":"On Non-Preemptive VM Scheduling in the Cloud","authors":"Konstantinos Psychas, Javad Ghaderi","doi":"10.1145/3219617.3219644","DOIUrl":"https://doi.org/10.1145/3219617.3219644","url":null,"abstract":"We study the problem of scheduling VMs (Virtual Machines) in a distributed server platform, motivated by cloud computing applications. The VMs arrive dynamically over time to the system, and require a certain amount of resources (e.g. memory, CPU, etc) for the duration of their service. To avoid costly preemptions, we consider non-preemptive scheduling: Each VM has to be assigned to a server which has enough residual capacity to accommodate it, and once a VM is assigned to a server, its service cannot be disrupted (preempted). Prior approaches to this problem either have high complexity, require synchronization among the servers, or yield queue sizes/delays which are excessively large. We propose a non-preemptive scheduling algorithm that resolves these issues. In general, given an approximation algorithm to Knapsack with approximation ratio r , our scheduling algorithm can provide rβ fraction of the throughput region for β < r. In the special case of a greedy approximation algorithm to Knapsack, we further show that this condition can be relaxed to β<1. The parameters β and r can be tuned to provide a tradeoff between achievable throughput, delay, and computational complexity of the scheduling algorithm. Finally extensive simulation results using both synthetic and real traffic traces are presented to verify the performance of our algorithm.","PeriodicalId":210440,"journal":{"name":"Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126377420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance of Balanced Fairness in Resource Pools: A Recursive Approach
T. Bonald, Céline Comte, Fabien Mathieu
DOI: https://doi.org/10.1145/3219617.3219669

Understanding the performance of a pool of servers is crucial for proper dimensioning. One of the main challenges is to take into account the complex interactions between servers that are pooled to process jobs. In particular, a given job generally cannot be processed by every server of the cluster, due to various constraints like data locality. In this paper, we represent these constraints by an assignment graph between jobs and servers. We present a recursive approach to computing performance metrics like mean response times when the server capacities are shared according to balanced fairness. While the computational cost of these formulas can be exponential in the number of servers in the worst case, we illustrate their practical interest by introducing broad classes of pool structures that can be analyzed exactly in polynomial time. This considerably extends the class of models for which explicit performance metrics are accessible.
{"title":"Performance of Balanced Fairness in Resource Pools: A Recursive Approach","authors":"T. Bonald, Céline Comte, Fabien Mathieu","doi":"10.1145/3219617.3219669","DOIUrl":"https://doi.org/10.1145/3219617.3219669","url":null,"abstract":"Understanding the performance of a pool of servers is crucial for proper dimensioning. One of the main challenges is to take into account the complex interactions between servers that are pooled to process jobs. In particular, a job can generally not be processed by any server of the cluster due to various constraints like data locality. In this paper, we represent these constraints by some assignment graph between jobs and servers. We present a recursive approach to computing performance metrics like mean response times when the server capacities are shared according to balanced fairness. While the computational cost of these formulas can be exponential in the number of servers in the worst case, we illustrate their practical interest by introducing broad classes of pool structures that can be exactly analyzed in polynomial time. This extends considerably the class of models for which explicit performance metrics are accessible.","PeriodicalId":210440,"journal":{"name":"Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115202575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Fine-grained Event-based Modem Power Model for Enabling In-depth Modem Energy Drain Analysis
Xiaomeng Chen, Jiayi Meng
DOI: https://doi.org/10.1145/3219617.3219660

Cellular modems enable ubiquitous Internet connectivity for modern smartphones, but in doing so they become a major contributor to smartphone energy drain. Understanding modem energy drain requires a detailed power model. The prior art, an RRC-state-based power model, was developed primarily to model the modem energy drain of application data transfer. As such, it serves its original purpose well, but it is insufficient for studying detailed modem behavior, e.g., activities in the control plane. In [2], we propose a new methodology for modeling modem power draw behavior at event granularity, and develop, to our knowledge, the first fine-grained modem power model that captures the power draw of all LTE modem radio-on events in different RRC modes. Second, we quantitatively demonstrate the advantages of the new model over the state-based power model under a wide variety of contexts via controlled experiments. Finally, using our fine-grained modem power model, we perform the first detailed in-the-wild study of modem energy drain, involving 12 Nexus 6 phones under normal usage by 12 volunteers spanning a total of 348 days. Our study provides the first quantitative analysis of energy drain due to modem control activities in the wild and exposes its correlation with context such as location and user mobility. In this abstract, we introduce the essence of the methodology and highlight results from the in-the-wild study.
{"title":"A Fine-grained Event-based Modem Power Model for Enabling In-depth Modem Energy Drain Analysis","authors":"Xiaomeng Chen, Jiayi Meng","doi":"10.1145/3219617.3219660","DOIUrl":"https://doi.org/10.1145/3219617.3219660","url":null,"abstract":"Cellular modems enable ubiquitous Internet connectivities to modern smartphones, but in doing so they become a major contributor to the smartphone energy drain. Understanding modem energy drain requires a detailed power model. The prior art, an RRC-state based power model, was developed primarily to model the modem energy drain of application data transfer. As such, it serves well its original purpose, but is insufficient to study detailed modem behavior, eg. activities in the control plane. In [2], we propose a new methodology of modeling modem power draw behavior at the event-granularity, and develop to our knowledge the first fine-grained modem power model that captures the power draw of all LTE modem radio-on events in different RRC modes. Second, we quantitatively demonstrate the advantages of the new model over the state-based power model under a wide variety of context via controlled experiments. Finally, using our fine-grained modem power model, we perform the first detailed modem energy drain in-the-wild study involving 12 Nexus 6 phones under normal usage by 12 volunteers spanning a total of 348 days. Our study provides the first quantitative analysis of energy drain due to modem control activities in the wild and exposes their correlation with context such as location and user mobility. In this abstracts, we introduce the essence of the methodology and the highlighted results from the in-the-wild study.","PeriodicalId":210440,"journal":{"name":"Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127932709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}