DIY Hosting for Online Privacy
Shoumik Palkar, M. Zaharia (doi:10.1145/3152434.3152459)

Web users today rely on centralized services for applications such as email, file transfer and chat. Unfortunately, these services create a significant privacy risk: even with a benevolent provider, a single breach can put millions of users' data at risk. One alternative would be for users to host their own servers, but this would be highly expensive for most applications: a single VM deployed in a high-availability mode can cost many dollars per month. In this paper, we propose Deploy It Yourself (DIY), a new model for hosting applications on serverless computing platforms such as AWS Lambda. DIY allows users to run a highly available service with much stronger privacy guarantees than current centralized providers, and at a dramatically lower cost than traditional server hosting. DIY relies only on the security of container isolation and a key manager, as opposed to the large codebase of a high-level application such as Gmail (and all the Google teams using Gmail data). With attestation technology such as SGX, DIY's execution could also be verified remotely. We show that a DIY email server that sends 500 messages/day costs $0.26/month, which is 50x cheaper than a highly available EC2 server. We also implement a DIY chat service and show that it performs well. Finally, we argue that DIY applications are simple enough to operate that cloud providers could offer a simple "app store" for using them.
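
As an illustration of the trust model described above (a minimal sketch under our own assumptions, not the paper's implementation), a DIY-style service boils down to a stateless handler that a serverless platform invokes per request, with user data kept encrypted at rest and a key manager as the only other trusted component. KeyManager, MAILBOX, and handle_send below are hypothetical names.

    # Minimal sketch of a DIY-style handler, assuming a serverless platform
    # that invokes handle_send per request. KeyManager and MAILBOX are
    # hypothetical stand-ins for a managed key service and an object store;
    # a real deployment would encrypt message bodies with the per-user key.
    import base64
    import json

    class KeyManager:
        """Stand-in for the managed key service trusted by the DIY model."""
        def __init__(self):
            self._keys = {}

        def get_data_key(self, user):
            # One data key per user; a real key manager would generate and
            # protect these keys rather than keep them in process memory.
            return self._keys.setdefault(user, ("key-for-" + user).encode())

    MAILBOX = {}  # object-store stand-in: user -> list of stored message blobs

    def handle_send(event, key_manager):
        """Serverless entry point: store one outgoing message for a user."""
        key = key_manager.get_data_key(event["user"])
        # Encryption elided: we only encode the payload to show where the
        # per-user key would be applied.
        blob = base64.b64encode(json.dumps(event["message"]).encode())
        MAILBOX.setdefault(event["user"], []).append({"key_id": key[:8], "blob": blob})
        return {"status": 200, "stored": len(MAILBOX[event["user"]])}

    if __name__ == "__main__":
        km = KeyManager()
        print(handle_send({"user": "alice",
                           "message": {"to": "bob@example.com", "body": "hi"}}, km))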
{"title":"DIY Hosting for Online Privacy","authors":"Shoumik Palkar, M. Zaharia","doi":"10.1145/3152434.3152459","DOIUrl":"https://doi.org/10.1145/3152434.3152459","url":null,"abstract":"Web users today rely on centralized services for applications such as email, file transfer and chat. Unfortunately, these services create a significant privacy risk: even with a benevolent provider, a single breach can put millions of users' data at risk. One alternative would be for users to host their own servers, but this would be highly expensive for most applications: a single VM deployed in a high-availability mode can cost many dollars per month. In this paper, we propose Deploy It Yourself (DIY), a new model for hosting applications based on serverless computing platforms such as Amazon Lambda. DIY allows users to run a highly available service with much stronger privacy guarantees than current centralized providers, and at a dramatically lower cost than traditional server hosting. DIY only relies on the security of container isolation and a key manager as opposed to the large codebase of a high-level application such as Gmail (and all the Google teams using Gmail data). With attestation technology such as SGX, DIY's execution could also be verified remotely. We show that a DIY email server that sends 500 messages/day costs $0.26/month, which is 50x cheaper than a highly available EC2 server. We also implement a DIY chat service and show that it performs well. Finally, we argue that DIY applications are simple enough to operate that cloud providers could offer a simple \"app store\" for using them.","PeriodicalId":120886,"journal":{"name":"Proceedings of the 16th ACM Workshop on Hot Topics in Networks","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114259884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

HotCocoa: Hardware Congestion Control Abstractions
Mina Tahmasbi Arashloo, Monia Ghobadi, J. Rexford, D. Walker (doi:10.1145/3152434.3152457)
Congestion control in multi-tenant data centers is an active area of research because of its significant impact on customer experience and, consequently, on revenue. Therefore, new algorithms and protocols are expected to emerge as the Cloud evolves. Deploying new congestion control algorithms in the end host's hypervisor allows frequent updates, but processing packets at high rates in the hypervisor and implementing the elements of a congestion control algorithm, such as traffic shapers and timestamps, in software suffer from well-studied inaccuracies and CPU inefficiencies. In this paper, we argue for implementing the entire congestion control algorithm in programmable NICs. To do so, we identify the absence of hardware-aware programming abstractions as the most immediate challenge and solve it using a simple high-level domain-specific language called HotCocoa. HotCocoa lies at a sweet spot between the ability to express a broad set of congestion control algorithms and efficient hardware implementation. It offers a set of hardware-aware COngestion COntrol Abstractions that enable operators to specify their algorithm without having to worry about low-level hardware primitives. To evaluate HotCocoa, we implement four congestion control algorithms (Reno, DCTCP, PCC, and TIMELY) and use simulations to show that HotCocoa's implementation of Reno perfectly tracks the behavior of a native implementation in C++.
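
HotCocoa's concrete syntax is specific to the paper and is not reproduced here, but the kind of logic such a language must capture can be sketched as an event-driven state machine over per-ACK and per-loss events. The Python sketch below expresses a Reno-style additive-increase/multiplicative-decrease loop in that form (names and structure are ours, chosen for illustration only):

    # Illustrative sketch (not HotCocoa code): a congestion control algorithm
    # written as handlers over two events, ACK and LOSS, which is roughly the
    # shape a hardware-targeted DSL needs to express.
    class RenoLikeState:
        def __init__(self, mss=1460, init_cwnd=10):
            self.mss = mss
            self.cwnd = init_cwnd * mss      # congestion window in bytes
            self.ssthresh = 64 * 1024        # slow-start threshold in bytes

        def on_ack(self, acked_bytes):
            if self.cwnd < self.ssthresh:
                # Slow start: grow the window by the number of bytes ACKed.
                self.cwnd += acked_bytes
            else:
                # Congestion avoidance: roughly one MSS per round trip.
                self.cwnd += self.mss * acked_bytes // self.cwnd
            return self.cwnd

        def on_loss(self):
            # Multiplicative decrease on a loss signal.
            self.ssthresh = max(self.cwnd // 2, 2 * self.mss)
            self.cwnd = self.ssthresh
            return self.cwnd

    if __name__ == "__main__":
        cc = RenoLikeState()
        for _ in range(20):
            cc.on_ack(1460)
        print("cwnd after 20 ACKs:", cc.cwnd)
        print("cwnd after a loss:", cc.on_loss())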
{"title":"HotCocoa: Hardware Congestion Control Abstractions","authors":"Mina Tahmasbi Arashloo, Monia Ghobadi, J. Rexford, D. Walker","doi":"10.1145/3152434.3152457","DOIUrl":"https://doi.org/10.1145/3152434.3152457","url":null,"abstract":"Congestion control in multi-tenant data centers is an active area of research because of its significant impact on customer experience, and, consequently, on revenue. Therefore, new algorithms and protocols are expected to emerge as the Cloud evolves. Deploying new congestion control algorithms in the end host's hypervisor allows frequent updates, but processing packets at high rates in the hypervisor and implementing the elements of a congestion control algorithm, such as traffic shapers and timestamps, in software have well-studied inaccuracies and CPU inefficiencies. In this paper, we argue for implementing the entire congestion control algorithm in programmable NICs. To do so, we identify the absence of hardware-aware programming abstractions as the most immediate challenge and solve it using a simple high-level domain specific language called HotCocoa. HotCocoa lies at a sweet spot between the ability to express a broad set of congestion control algorithms and efficient hardware implementation. It offers a set of hardware-aware COngestion COntrol Abstractions that enable operators to specify their algorithm without having to worry about low-level hardware primitives. To evaluate HotCocoa, we implement four congestion control algorithms (Reno, DCTCP, PCC, and TIMELY) and use simulations to show that HotCocoa's implementation of Reno perfectly tracks the behavior of a native implementation in C++.","PeriodicalId":120886,"journal":{"name":"Proceedings of the 16th ACM Workshop on Hot Topics in Networks","volume":"156 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114015563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Stick a fork in it: Analyzing the Ethereum network partition
Lucianna Kiffer, Dave Levin, A. Mislove (doi:10.1145/3152434.3152449)

As blockchain technologies and cryptocurrencies increase in popularity, their decentralization poses unique challenges when network partitions occur. In traditional distributed systems, network partitions are generally the result of bugs or connectivity failures; the typical goal of the system designer is to recover from such issues automatically and as seamlessly as possible. Blockchain-based systems, however, rely on purposeful "forks" to roll out protocol changes in a decentralized manner. Not all users may agree with proposed changes, and thus forks can persist, leading to permanent network partitions. In this paper, we closely study the large-scale fork that occurred in Ethereum, a blockchain technology that supports both currency transactions and smart contracts. Ethereum is currently the second-most-valuable cryptocurrency, with a market capitalization of over $28B. We explore the consequences of this fork, showing its impact on the two resulting networks and their mining pools, and how the fork led to unintended incentives and security vulnerabilities.
{"title":"Stick a fork in it: Analyzing the Ethereum network partition","authors":"Lucianna Kiffer, Dave Levin, A. Mislove","doi":"10.1145/3152434.3152449","DOIUrl":"https://doi.org/10.1145/3152434.3152449","url":null,"abstract":"As blockchain technologies and cryptocurrencies increase in popularity, their decentralization poses unique challenges in network partitions. In traditional distributed systems, network partitions are generally a result of bugs or connectivity failures; the typical goal of the system designer is to automatically recover from such issues as seamlessly as possible. Blockchain-based systems, however, rely on purposeful \"forks\" to roll out protocol changes in a decentralized manner. Not all users may agree with proposed changes, and thus forks can persist, leading to permanent network partitions. In this paper, we closely study the large-scale fork that occurred in Ethereum, a new blockchain technology that allows for both currency transactions and smart contracts. Ethereum is currently the second-most-valuable cryptocurrency, with a market capitalization of over $28B. We explore the consequences of this fork, showing the impact on the two networks and their mining pools, and how the fork lead to unintentional incentives and security vulnerabilities.","PeriodicalId":120886,"journal":{"name":"Proceedings of the 16th ACM Workshop on Hot Topics in Networks","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121457612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

An Axiomatic Approach to Congestion Control
Doron Zarchy, R. Mittal, Michael Schapira, S. Shenker (doi:10.1145/3152434.3152445)
Recent years have witnessed a surge of interest in congestion control. Unfortunately, the overwhelmingly large design space, along with the increasingly diverse range of application environments, makes evaluating congestion control protocols a daunting task. Researchers often use simulation and experiments to examine the performance of designs in specific contexts, but this gives limited insight into the more general properties of these schemes and provides no information about the inherent limits of congestion control designs, e.g., which properties are simultaneously achievable. To complement simulation and experimentation, we advocate a principled framework for reasoning about congestion control protocols. We report on our initial steps in this direction, inspired by the axiomatic approach from social choice theory and game theory. We consider several natural requirements ("axioms") of congestion control protocols -- e.g., efficient resource utilization, loss avoidance, fairness, stability, and TCP-friendliness -- and investigate which combinations of these can be achieved within a single design. Thus, our framework allows us to investigate the fundamental tradeoffs between desiderata, and to identify where existing and new congestion control architectures fit within the space of possible outcomes. We believe that our results are but a first step in the axiomatic exploration of congestion control and leave the reader with exciting directions for future research.
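
To give a flavor of what such requirements can look like (written in our own notation, not necessarily as formalized in the paper), consider a single link of capacity C shared by flows with steady-state rates x_1, ..., x_n:

    % Illustrative axioms in our own notation (a sketch, not the paper's definitions).
    \text{Efficiency: } \sum_{i=1}^{n} x_i \ge \alpha \cdot C
        \quad \text{for some constant } \alpha \in (0, 1],
    \text{Fairness: } \frac{\max_i x_i}{\min_i x_i} \le \beta
        \quad \text{for some constant } \beta \ge 1.

The axiomatic question is then which combinations of such constants (here, alpha and beta), together with loss-avoidance, stability, and TCP-friendliness requirements, are simultaneously achievable by a single protocol.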
{"title":"An Axiomatic Approach to Congestion Control","authors":"Doron Zarchy, R. Mittal, Michael Schapira, S. Shenker","doi":"10.1145/3152434.3152445","DOIUrl":"https://doi.org/10.1145/3152434.3152445","url":null,"abstract":"Recent years have witnessed a surge of interest in congestion control. Unfortunately, the overwhelmingly large design space along with the increasingly diverse range of application environments makes evaluating congestion control protocols a daunting task. Researchers often use simulation and experiments to examine the performance of designs in specific contexts, but this gives limited insight into the more general properties of these schemes and provides no information about the inherent limits of congestion control designs, e.g., which properties are simultaneously achievable. To complement simulation and experimentation, we advocate a principled framework for reasoning about congestion control protocols. We report on our initial steps in this direction, which was inspired by the axiomatic approach from social choice theory and game theory. We consider several natural requirements (\"axioms\") from congestion control protocols -- e.g., efficient resource-utilization, loss-avoidance, fairness, stability, and TCP-friendliness -- and investigate which combinations of these can be achieved within a single design. Thus, our framework allows us to investigate the fundamental tradeoffs between desiderata, and to identify where existing and new congestion control architectures fit within the space of possible outcomes. We believe that our results are but a first step in the axiomatic exploration of congestion control and leave the reader with exciting directions for future research.","PeriodicalId":120886,"journal":{"name":"Proceedings of the 16th ACM Workshop on Hot Topics in Networks","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114402289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Learning to Route
Asaf Valadarsky, Michael Schapira, Dafna Shahaf, Aviv Tamar (doi:10.1145/3152434.3152441)
Recently, much attention has been devoted to the question of whether, and when, traditional network protocol design, which relies on the application of algorithmic insights by human experts, can be replaced by a data-driven (i.e., machine learning) approach. We explore this question in the context of arguably the most fundamental networking task: routing. Can ideas and techniques from machine learning (ML) be leveraged to automatically generate "good" routing configurations? We focus on the classical setting of intradomain traffic engineering. We observe that this context poses significant challenges for data-driven protocol design. Our preliminary results regarding the power of data-driven routing suggest that applying ML (specifically, deep reinforcement learning) to this context yields high performance and is a promising direction for further research. We outline a research agenda for ML-guided routing.
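
As a concrete illustration of this framing (a toy sketch under our own assumptions, not the paper's setup), intradomain traffic engineering can be cast as an environment in which the agent's action is a vector of link weights, routing follows the cheaper path induced by those weights, and the reward penalizes the most congested link; a learned policy would emit the weights from observed traffic. The topology, capacities, and demands below are made up.

    # Toy sketch of routing as a reinforcement learning environment: the action
    # is a per-edge weight vector, each demand follows the cheaper of its two
    # possible paths, and the reward is the negative maximum link utilization.
    CAPACITY = {("a", "b"): 5.0, ("b", "c"): 20.0, ("a", "c"): 20.0}
    DEMANDS = {("a", "b"): 8.0, ("a", "c"): 2.0, ("b", "c"): 2.0}

    def edge(u, v):
        return (u, v) if (u, v) in CAPACITY else (v, u)

    def reward(weights):
        """weights: dict mapping each edge to a positive weight (the action)."""
        load = {e: 0.0 for e in CAPACITY}
        for (src, dst), volume in DEMANDS.items():
            via = ({"a", "b", "c"} - {src, dst}).pop()
            direct_cost = weights[edge(src, dst)]
            detour_cost = weights[edge(src, via)] + weights[edge(via, dst)]
            if direct_cost <= detour_cost:
                load[edge(src, dst)] += volume
            else:
                load[edge(src, via)] += volume
                load[edge(via, dst)] += volume
        return -max(load[e] / CAPACITY[e] for e in CAPACITY)

    if __name__ == "__main__":
        uniform = {e: 1.0 for e in CAPACITY}
        # Raising the weight of the small a-b link diverts its demand onto the
        # two larger links, lowering the worst-case utilization.
        tuned = {("a", "b"): 5.0, ("b", "c"): 1.0, ("a", "c"): 1.0}
        print("reward(uniform):", reward(uniform))  # -1.6: the a-b link overflows
        print("reward(tuned):  ", reward(tuned))    # -0.5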
{"title":"Learning to Route","authors":"Asaf Valadarsky, Michael Schapira, Dafna Shahaf, Aviv Tamar","doi":"10.1145/3152434.3152441","DOIUrl":"https://doi.org/10.1145/3152434.3152441","url":null,"abstract":"Recently, much attention has been devoted to the question of whether/when traditional network protocol design, which relies on the application of algorithmic insights by human experts, can be replaced by a data-driven (i.e., machine learning) approach. We explore this question in the context of the arguably most fundamental networking task: routing. Can ideas and techniques from machine learning (ML) be leveraged to automatically generate \"good\" routing configurations? We focus on the classical setting of intradomain traffic engineering. We observe that this context poses significant challenges for data-driven protocol design. Our preliminary results regarding the power of data-driven routing suggest that applying ML (specifically, deep reinforcement learning) to this context yields high performance and is a promising direction for further research. We outline a research agenda for ML-guided routing.","PeriodicalId":120886,"journal":{"name":"Proceedings of the 16th ACM Workshop on Hot Topics in Networks","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128459057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Biases in Data-Driven Networking, and What to Do About Them
Mihovil Bartulovic, Junchen Jiang, Sivaraman Balakrishnan, V. Sekar, B. Sinopoli (doi:10.1145/3152434.3152448)
Recent efforts highlight the promise of data-driven approaches to optimizing network decisions. Many such efforts use trace-driven evaluation, i.e., running offline analysis on network traces to estimate the potential benefits of different policies before running them in practice. Unfortunately, such frameworks can have fundamental pitfalls (e.g., skews due to the previous policies used during data collection and insufficient data for specific subpopulations) that could lead to misleading estimates and, ultimately, suboptimal decisions. In this paper, we shed light on these pitfalls and identify a promising roadmap to address them by leveraging techniques from causal inference, namely the Doubly Robust estimator.
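
For context, the Doubly Robust estimator in its standard contextual-bandit form (our summary of the textbook formulation, not necessarily the exact variant adopted in the paper) combines a learned reward model \hat{r} with inverse propensity weighting. Given logged tuples (x_i, a_i, r_i), where x_i is the context, a_i the action chosen by the logging policy with estimated propensity \hat{p}(a_i | x_i), and r_i the observed reward, the value of a candidate policy \pi is estimated as:

    % Standard doubly robust off-policy value estimate; \hat{r} is a learned
    % reward model and \hat{p} the estimated logging propensity.
    \hat{V}_{\mathrm{DR}}(\pi) \;=\; \frac{1}{n} \sum_{i=1}^{n}
        \left[ \hat{r}\big(x_i, \pi(x_i)\big)
        + \frac{\mathbf{1}\{a_i = \pi(x_i)\}}{\hat{p}(a_i \mid x_i)}
          \big( r_i - \hat{r}(x_i, a_i) \big) \right]

The estimate remains consistent if either the reward model or the propensity model is accurate, which is what makes it attractive for trace-driven evaluation, where both are typically imperfect.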
{"title":"Biases in Data-Driven Networking, and What to Do About Them","authors":"Mihovil Bartulovic, Junchen Jiang, Sivaraman Balakrishnan, V. Sekar, B. Sinopoli","doi":"10.1145/3152434.3152448","DOIUrl":"https://doi.org/10.1145/3152434.3152448","url":null,"abstract":"Recent efforts highlight the promise of data-driven approaches to optimize network decisions. Many such efforts use trace-driven evaluation; i.e., running offline analysis on network traces to estimate the potential benefits of different policies before running them in practice. Unfortunately, such frameworks can have fundamental pitfalls (e.g., skews due to previous policies that were used in the data collection phase and insufficient data for specific subpopulations) that could lead to misleading estimates and ultimately suboptimal decisions. In this paper, we shed light on such pitfalls and identify a promising roadmap to address these pitfalls by leveraging parallels in causal inference, namely the Doubly Robust estimator.","PeriodicalId":120886,"journal":{"name":"Proceedings of the 16th ACM Workshop on Hot Topics in Networks","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122778934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

FreeLab: A Free Experimentation Platform
Matteo Varvello, Diego Perino (doi:10.1145/3152434.3152436)

As researchers, we are aware of how hard it is to obtain access to vantage points in the Internet. Experimentation platforms are useful tools, but they are also: 1) paid, either via a membership fee or through resource sharing; 2) unreliable, as nodes come and go; and 3) outdated, often still running on their original hardware and OS. While one could build yet another platform with up-to-date and reliable hardware and software, it is hard to imagine one that is free. This is the goal of this paper: we set out to build FreeLab, a free experimentation platform that also aims to be reliable and up-to-date. The key idea behind FreeLab is that experiments run directly on its users' machines, while traffic is relayed by free vantage points in the Internet (web and SOCKS proxies, and DNS resolvers). FreeLab is thus free to access by design and stays up-to-date as long as its users maintain their experimenting machines. Reliability is a key challenge, due both to the volatile nature of free resources and to the errors (path inflation, header manipulation, bandwidth shrinkage) introduced by traffic relays.
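
To illustrate the relay idea and the measurement error it introduces (a sketch under our own assumptions, not FreeLab's code), the snippet below fetches the same URL directly and through a free web proxy using only the Python standard library; the proxy address is a hypothetical placeholder for a vantage point drawn from FreeLab's pool.

    # Sketch: compare a direct fetch with a fetch relayed through a free web
    # proxy, exposing the path inflation a relay adds. A FreeLab-like system
    # would rotate through a measured pool of proxies and correct for their
    # overhead. The proxy address below is a placeholder.
    import time
    import urllib.request

    TARGET = "http://example.com/"
    FREE_PROXY = "http://203.0.113.7:8080"  # hypothetical free web proxy

    def timed_fetch(url, proxy=None):
        handlers = [urllib.request.ProxyHandler({"http": proxy})] if proxy else []
        opener = urllib.request.build_opener(*handlers)
        start = time.monotonic()
        try:
            with opener.open(url, timeout=5) as resp:
                resp.read()
            return time.monotonic() - start
        except OSError:
            return None  # the relay may be dead: a core reliability challenge

    if __name__ == "__main__":
        print("direct :", timed_fetch(TARGET))
        print("relayed:", timed_fetch(TARGET, FREE_PROXY))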
{"title":"FreeLab: A Free Experimentation Platform","authors":"Matteo Varvello, Diego Perino","doi":"10.1145/3152434.3152436","DOIUrl":"https://doi.org/10.1145/3152434.3152436","url":null,"abstract":"As researchers, we are aware of how hard it is to obtain access to vantage points in the Internet. Experimentation platforms are useful tools, but they are also: 1) paid, either via a membership fee or by resource sharing, 2) unreliable, nodes come and go, 3) outdated, often still run on their original hardware and OS. While one could build yet-another platform with up-to-date and reliable hardware and software, it is hard to imagine one which is free. This is the goal of this paper: we set out to build FreeLab, a free experimentation platform which also aims to be reliable and up-to-date. The key idea behind FreeLab is that experiments run directly at its user machines, while traffic is relayed by free vantage points in the Internet (web and SOCKS proxies, and DNS resolvers). FreeLab is thus free of access by design and up-to-date as far as its users maintain their experimenting machines. Reliability is a key challenge due to the volatile nature of free resources, and the introduction of errors (path inflation, header manipulation, bandwidth shrinkage) caused by traffic relays.","PeriodicalId":120886,"journal":{"name":"Proceedings of the 16th ACM Workshop on Hot Topics in Networks","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129867311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

In-Network Computation is a Dumb Idea Whose Time Has Come
Amedeo Sapio, I. Abdelaziz, Abdulla Aldilaijan, M. Canini, Panos Kalnis (doi:10.1145/3152434.3152461)
Programmable data plane hardware creates new opportunities for infusing intelligence into the network. This raises a fundamental question: what kinds of computation should be delegated to the network? In this paper, we discuss the opportunities and challenges of co-designing data center distributed systems with their network layer. We believe that the time has finally come for offloading part of their computation to execute in-network. However, in-network computation tasks must be judiciously crafted to match the limitations of the machine architecture of programmable network devices. Drawing on our experiments with machine learning and graph analytics workloads, we identify aggregation functions as an opportunity to exploit the limited computation power of networking hardware to lessen network congestion and improve overall application performance. Moreover, as a proof of concept, we propose Daiet, a system that performs in-network data aggregation. Experimental results with an initial prototype show a large data reduction ratio (86.9%-89.3%) and a similar decrease in the workers' computation time.
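
The aggregation opportunity can be illustrated without any switch hardware (a Python sketch with made-up worker updates, not Daiet itself): when several workers emit partial key-value updates, summing them inside the network means the receiver sees one record per key rather than one per worker, which is where the data reduction comes from. The ratio grows with the number of workers contributing updates for the same keys.

    # Sketch of in-network aggregation: the "switch" sums per-key partial
    # updates from workers before forwarding them, so the parameter server
    # receives one value per key instead of one per worker.
    from collections import Counter

    def switch_aggregate(worker_updates):
        """worker_updates: list of dicts mapping key -> partial value."""
        total = Counter()
        for update in worker_updates:
            total.update(update)
        return dict(total)

    if __name__ == "__main__":
        updates = [
            {"w0": 0.1, "w1": -0.2, "w2": 0.05},
            {"w0": 0.3, "w1": 0.1, "w2": -0.15},
            {"w0": -0.1, "w1": 0.2, "w2": 0.05},
        ]
        aggregated = switch_aggregate(updates)
        sent = sum(len(u) for u in updates)
        received = len(aggregated)
        print("aggregated:", aggregated)
        print("reduction ratio: %.1f%%" % (100.0 * (1 - received / sent)))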
{"title":"In-Network Computation is a Dumb Idea Whose Time Has Come","authors":"Amedeo Sapio, I. Abdelaziz, Abdulla Aldilaijan, M. Canini, Panos Kalnis","doi":"10.1145/3152434.3152461","DOIUrl":"https://doi.org/10.1145/3152434.3152461","url":null,"abstract":"Programmable data plane hardware creates new opportunities for infusing intelligence into the network. This raises a fundamental question: what kinds of computation should be delegated to the network? In this paper, we discuss the opportunities and challenges for co-designing data center distributed systems with their network layer. We believe that the time has finally come for offloading part of their computation to execute in-network. However, in-network computation tasks must be judiciously crafted to match the limitations of the network machine architecture of programmable devices. With the help of our experiments on machine learning and graph analytics workloads, we identify that aggregation functions raise opportunities to exploit the limited computation power of networking hardware to lessen network congestion and improve the overall application performance. Moreover, as a proof-of-concept, we propose Daiet, a system that performs in-network data aggregation. Experimental results with an initial prototype show a large data reduction ratio (86.9%-89.3%) and a similar decrease in the workers' computation time.","PeriodicalId":120886,"journal":{"name":"Proceedings of the 16th ACM Workshop on Hot Topics in Networks","volume":"2013 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129907771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Harvesting Randomness to Optimize Distributed Systems
Mathias Lécuyer, Joshua Lockerman, Lamont Nelson, S. Sen, Amit Sharma, Aleksandrs Slivkins (doi:10.1145/3152434.3152435)
We view randomization through the lens of statistical machine learning: as a powerful resource for offline optimization. Cloud systems make randomized decisions all the time (e.g., in load balancing), yet this randomness is rarely used for optimization after the fact. By casting system decisions in the framework of reinforcement learning, we show how to collect data from existing systems, without modifying them, to evaluate new policies, without deploying them. Our methodology, called harvesting randomness, has the potential to accurately estimate a policy's performance without the risk or cost of deploying it on live traffic. We quantify this optimization power and apply it to a real machine health scenario in Azure Compute. We also apply it to two prototyped scenarios, for load balancing (Nginx) and caching (Redis), with much less success, and use them to identify the systems and machine learning challenges to achieving our goal. Our long-term agenda is to harvest the randomness in distributed systems to develop non-invasive and efficient techniques for optimizing them. Like CPU cycles and bandwidth, we view randomness as a valuable resource being wasted by the cloud, and we seek to remedy this.
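
A minimal version of the harvest-then-evaluate step can be written down directly (a sketch of standard inverse propensity scoring, assuming the system logs each randomized decision together with the probability of making it; this is not the authors' implementation, and the log records below are fabricated):

    # Sketch: off-policy evaluation from a system's own randomized decisions.
    # Each log record holds the context, the action the live system randomly
    # chose, the probability (propensity) of that choice, and the observed
    # reward.
    def ips_estimate(logs, new_policy):
        """Inverse-propensity-scored estimate of new_policy's average reward."""
        total = 0.0
        for record in logs:
            if new_policy(record["context"]) == record["action"]:
                total += record["reward"] / record["propensity"]
        return total / len(logs)

    def prefer_low_load(context):
        """Candidate load-balancing policy: pick the less loaded backend."""
        return min(context["load"], key=context["load"].get)

    if __name__ == "__main__":
        logs = [
            {"context": {"load": {"b1": 3, "b2": 9}}, "action": "b1",
             "propensity": 0.5, "reward": 1.0},
            {"context": {"load": {"b1": 8, "b2": 2}}, "action": "b1",
             "propensity": 0.5, "reward": 0.2},
            {"context": {"load": {"b1": 7, "b2": 4}}, "action": "b2",
             "propensity": 0.5, "reward": 0.9},
        ]
        print("estimated reward of prefer_low_load:",
              ips_estimate(logs, prefer_low_load))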
{"title":"Harvesting Randomness to Optimize Distributed Systems","authors":"Mathias Lécuyer, Joshua Lockerman, Lamont Nelson, S. Sen, Amit Sharma, Aleksandrs Slivkins","doi":"10.1145/3152434.3152435","DOIUrl":"https://doi.org/10.1145/3152434.3152435","url":null,"abstract":"We view randomization through the lens of statistical machine learning: as a powerful resource for offline optimization. Cloud systems make randomized decisions all the time (e.g., in load balancing), yet this randomness is rarely used for optimization after-the-fact. By casting system decisions in the framework of reinforcement learning, we show how to collect data from existing systems, without modifying them, to evaluate new policies, without deploying them. Our methodology, called harvesting randomness, has the potential to accurately estimate a policy's performance without the risk or cost of deploying it on live traffic. We quantify this optimization power and apply it to a real machine health scenario in Azure Compute. We also apply it to two prototyped scenarios, for load balancing (Nginx) and caching (Redis), with much less success, and use them to identify the systems and machine learning challenges to achieving our goal. Our long-term agenda is to harvest the randomness in distributed systems to develop non-invasive and efficient techniques for optimizing them. Like CPU cycles and bandwidth, we view randomness as a valuable resource being wasted by the cloud, and we seek to remedy this.","PeriodicalId":120886,"journal":{"name":"Proceedings of the 16th ACM Workshop on Hot Topics in Networks","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132518908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Online Advertising under Internet Censorship
Hira Javaid, Hafiz Kamran Khalil, Z. A. Uzmi, I. Qazi (doi:10.1145/3152434.3152455)
Online advertising plays a critical role in enabling the free Web by allowing publishers to monetize their services. However, the rise in Internet censorship events globally poses an economic threat to the advertising ecosystem. This paper studies this interplay and presents Advention, a system that provides censorship circumvention while serving relevant ads. Advention leverages the observation that ad systems are usually hosted on domains that are different from the publisher domains and are almost always uncensored. Taking a cue from this, Advention fetches ads via the direct, uncensored channel between users and the ad system. Preliminary results show that Advention not only offers higher ad relevance than other popular relay-based circumvention tools, but also achieves lower page load times.
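
The core routing decision can be stated in a few lines (our sketch with hypothetical domain lists and URLs, not the authors' code): requests destined for ad-system domains are fetched over the direct channel, while everything else goes through the circumvention relay.

    # Sketch of the split-routing idea: ad-system domains (typically
    # uncensored) are fetched directly, publisher content goes through the
    # relay. The domain list and URLs are hypothetical.
    from urllib.parse import urlparse

    AD_DOMAINS = {"ads.example-adnetwork.com", "syndication.example-ads.net"}

    def choose_channel(url):
        host = urlparse(url).hostname or ""
        return "direct" if host in AD_DOMAINS else "relay"

    if __name__ == "__main__":
        for url in [
            "http://ads.example-adnetwork.com/serve?slot=banner",
            "http://censored-publisher.example.org/article/42",
        ]:
            print(choose_channel(url), "<-", url)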
{"title":"Online Advertising under Internet Censorship","authors":"Hira Javaid, Hafiz Kamran Khalil, Z. A. Uzmi, I. Qazi","doi":"10.1145/3152434.3152455","DOIUrl":"https://doi.org/10.1145/3152434.3152455","url":null,"abstract":"Online advertising plays a critical role in enabling the free Web by allowing publishers to monetize their services. However, the rise in internet censorship events globally poses an economic threat to the advertising ecosystem. This paper studies this interplay and presents Advention, a system that provides censorship circumvention while serving relevant ads. Advention leverages the observation that ad systems are usually hosted on domains that are different from the publisher domains and are almost always uncensored. Taking cue from this, Advention fetches ads via the direct, uncensored, channel between users and the ad system. Preliminary results show that Advention not only offers high ad relevance compared to other popular relay-based circumvention tools, it also offers smaller page load times.","PeriodicalId":120886,"journal":{"name":"Proceedings of the 16th ACM Workshop on Hot Topics in Networks","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114970687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}