ELZAR: Triple Modular Redundancy Using Intel AVX (Practical Experience Report)
Dmitrii Kuvaiskii, O. Oleksenko, Pramod Bhatotia, Pascal Felber, C. Fetzer
Instruction-Level Redundancy (ILR) is a well-known approach to tolerating transient CPU faults. It replicates instructions in a program and inserts periodic checks to detect and correct CPU faults using majority voting, which essentially requires three copies of each instruction and leads to high performance overheads. As SIMD technology can operate simultaneously on several copies of the data, it appears to be a good candidate for decreasing these overheads. To verify this hypothesis, we propose ELZAR, a compiler framework that transforms unmodified multithreaded applications to support triple modular redundancy using Intel AVX extensions for vectorization. Our experience with several benchmark suites and real-world case studies yields mixed results: while SIMD may be beneficial for some workloads, e.g., CPU-intensive ones with many floating-point operations, it incurs higher overheads than ILR in many of the applications we tested.
{"title":"ELZAR: Triple Modular Redundancy Using Intel AVX (Practical Experience Report)","authors":"Dmitrii Kuvaiskii, O. Oleksenko, Pramod Bhatotia, Pascal Felber, C. Fetzer","doi":"10.1109/DSN.2016.65","DOIUrl":"https://doi.org/10.1109/DSN.2016.65","url":null,"abstract":"Instruction-Level Redundancy (ILR) is a well-known approach to tolerate transient CPU faults. It replicates instructions in a program and inserts periodic checks to detect and correct CPU faults using majority voting, which essentially requires three copies of each instruction and leads to high performance overheads. As SIMD technology can operate simultaneously on several copies of the data, it appears to be a good candidate for decreasing these overheads. To verify this hypothesis, we propose ELZAR, a compiler framework that transforms unmodified multithreaded applications to support triple modular redundancy using Intel AVX extensions for vectorization. Our experience with several benchmark suites and real-world case-studies yields mixed results: while SIMD may be beneficial for some workloads, e.g., CPU-intensive ones with many floating-point operations, it exposes higher overhead than ILR in many applications we tested.","PeriodicalId":102292,"journal":{"name":"2016 46th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)","volume":"352 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122845140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Secure and Efficient Multi-Variant Execution Using Hardware-Assisted Process Virtualization
Koen Koning, H. Bos, Cristiano Giuffrida
Memory error exploits rank among the most serious security threats. Of the plethora of memory error containment solutions proposed over the years, most have proven to be too weak in practice. Multi-Variant eXecution (MVX) solutions can potentially detect arbitrary memory error exploits via divergent behavior observed in diversified program variants running in parallel. However, none have found practical applicability in security due to their non-trivial performance limitations. In this paper, we present MvArmor, an MVX system that uses hardware-assisted process virtualization to monitor variants for divergent behavior in an efficient yet secure way. To provide comprehensive protection against memory error exploits, MvArmor relies on a new MVX-aware variant generation strategy. The system supports user-configurable security policies to tune the performance-security trade-off. Our analysis shows that MvArmor can counter many classes of modern attacks at the cost of modest performance overhead, even with conservative detection policies.
{"title":"Secure and Efficient Multi-Variant Execution Using Hardware-Assisted Process Virtualization","authors":"Koen Koning, H. Bos, Cristiano Giuffrida","doi":"10.1109/DSN.2016.46","DOIUrl":"https://doi.org/10.1109/DSN.2016.46","url":null,"abstract":"Memory error exploits rank among the most serious security threats. Of the plethora of memory error containment solutions proposed over the years, most have proven to be too weak in practice. Multi-Variant eXecution (MVX) solutions can potentially detect arbitrary memory error exploits via divergent behavior observed in diversified program variants running in parallel. However, none have found practical applicability in security due to their non-trivial performance limitations. In this paper, we present MvArmor, an MVX system that uses hardware-assisted process virtualization to monitor variants for divergent behavior in an efficient yet secure way. To provide comprehensive protection against memory error exploits, MvArmor relies on a new MVX-aware variant generation strategy. The system supports user-configurable security policies to tune the performance-security trade-off. Our analysis shows that MvArmor can counter many classes of modern attacks at the cost of modest performance overhead, even with conservative detection policies.","PeriodicalId":102292,"journal":{"name":"2016 46th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127107815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
OSIRIS: Efficient and Consistent Recovery of Compartmentalized Operating Systems
Koustubha Bhat, Dirk Vogt, E. V. D. Kouwe, Ben Gras, Lionel Sambuc, A. Tanenbaum, H. Bos, Cristiano Giuffrida
Much research has gone into making operating systems more amenable to recovery and more resilient to crashes. Traditional solutions rely on partitioning the operating system (OS) to contain the effects of crashes within compartments and facilitate modular recovery. However, state dependencies among the compartments hinder recovery that is globally consistent. Such recovery typically requires expensive runtime dependency tracking, which results in high performance overhead, high complexity, and a large Reliable Computing Base (RCB). We propose a lightweight strategy that limits recovery to cases where we can statically and conservatively prove that compartment recovery leads to a globally consistent state, trading recoverable surface for a simpler and smaller RCB with lower performance overhead and maintenance cost. We present OSIRIS, a research OS design prototype that demonstrates efficient and consistent crash recovery. Our evaluation shows that OSIRIS effectively recovers from important classes of real-world software bugs with a modest RCB and low overheads.
{"title":"OSIRIS: Efficient and Consistent Recovery of Compartmentalized Operating Systems","authors":"Koustubha Bhat, Dirk Vogt, E. V. D. Kouwe, Ben Gras, Lionel Sambuc, A. Tanenbaum, H. Bos, Cristiano Giuffrida","doi":"10.1109/DSN.2016.12","DOIUrl":"https://doi.org/10.1109/DSN.2016.12","url":null,"abstract":"Much research has gone into making operating systems more amenable to recovery and more resilient to crashes. Traditional solutions rely on partitioning the operating system (OS) to contain the effects of crashes within compartments and facilitate modular recovery. However, state dependencies among the compartments hinder recovery that is globally consistent. Such recovery typically requires expensive runtime dependency tracking which results in high performance overhead, highcomplexity and a large Reliable Computing Base (RCB). We propose a lightweight strategy that limits recovery to cases where we can statically and conservatively prove that compartment recovery leads to a globally consistent state - trading recoverable surface for a simpler and smaller RCB with lower performance overhead and maintenance cost. We present OSIRIS, a research OS design prototype that demonstrates efficient and consistent crash recovery. Our evaluation shows that OSIRIS effectively recovers from important classes of real-world software bugs with a modest RCB and low overheads.","PeriodicalId":102292,"journal":{"name":"2016 46th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123562672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DomainProfiler: Discovering Domain Names Abused in Future
Daiki Chiba, Takeshi Yagi, Mitsuaki Akiyama, Toshiki Shibahara, T. Yada, Tatsuya Mori, Shigeki Goto
Cyber attackers abuse the domain name system (DNS) to obscure their attack ecosystems: they systematically generate a huge volume of distinct domain names, making it infeasible for blacklisting approaches to keep up with newly generated malicious domain names. As a solution to this problem, we propose a system for discovering malicious domain names that are likely to be abused in the future. The key idea behind our system is to exploit temporal variation patterns (TVPs) of domain names. The TVPs of domain names include information about how and when a domain name has been listed in legitimate/popular and/or malicious domain name lists. On the basis of this idea, our system actively collects DNS logs, analyzes their TVPs, and predicts whether a given domain name will be used for malicious purposes. Our evaluation revealed that our system can predict malicious domain names 220 days beforehand with a true positive rate of 0.985.
{"title":"DomainProfiler: Discovering Domain Names Abused in Future","authors":"Daiki Chiba, Takeshi Yagi, Mitsuaki Akiyama, Toshiki Shibahara, T. Yada, Tatsuya Mori, Shigeki Goto","doi":"10.1109/DSN.2016.51","DOIUrl":"https://doi.org/10.1109/DSN.2016.51","url":null,"abstract":"Cyber attackers abuse the domain name system (DNS) to mystify their attack ecosystems, they systematically generate a huge volume of distinct domain names to make it infeasible for blacklisting approaches to keep up with newly generated malicious domain names. As a solution to this problem, we propose a system for discovering malicious domain names that will likely be abused in future. The key idea with our system is to exploit temporal variation patterns (TVPs) of domain names. The TVPs of domain names include information about how and when a domain name has been listed in legitimate/popular and/or malicious domain name lists. On the basis of this idea, our system actively collects DNS logs, analyzes their TVPs, and predicts whether a given domain name will be used for malicious purposes. Our evaluation revealed that our system can predict malicious domain names 220 days beforehand with a true positive rate of 0.985.","PeriodicalId":102292,"journal":{"name":"2016 46th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115621307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HSFI: Accurate Fault Injection Scalable to Large Code Bases
E. V. D. Kouwe, A. Tanenbaum
When software fault injection is used, faults are typically inserted at either the binary or the source level. The former is fast but provides poor fault accuracy, while the latter cannot scale to large code bases because the program must be rebuilt for each experiment. Alternatives that avoid rebuilding incur large run-time overheads by applying fault injection decisions at run time. HSFI, our new design, injects faults with all context information from the source level and applies fault injection decisions efficiently on the binary. It places markers in the original code that can be recognized after code generation. We implemented a tool according to the new design and evaluated the time taken per fault injection experiment when using operating systems as targets. We can perform experiments more quickly than other source-based approaches, achieving performance that comes close to that of binary-level fault injection while retaining the benefits of source-level fault injection.
{"title":"HSFI: Accurate Fault Injection Scalable to Large Code Bases","authors":"E. V. D. Kouwe, A. Tanenbaum","doi":"10.1109/DSN.2016.22","DOIUrl":"https://doi.org/10.1109/DSN.2016.22","url":null,"abstract":"When software fault injection is used, faults are typically inserted at the binary or source level. The former is fast but provides poor fault accuracy while the latter cannot scale to large code bases because the program must be rebuilt for each experiment. Alternatives that avoid rebuilding incur large run-time overheads by applying fault injection decisions at run-time. HSFI, our new design, injects faults with all context information from the source level and applies fault injection decisions efficiently on the binary. It places markers in the original code that can be recognized after code generation. We implemented a tool according to the new design and evaluated the time taken per fault injection experiment when using operating systems as targets. We can perform experiments more quickly than other source-based approaches, achieving performance that come close to that of binary-level fault injection while retaining the benefits of source-level fault injection.","PeriodicalId":102292,"journal":{"name":"2016 46th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125141076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mean Field Approximation of Uncertain Stochastic Models
L. Bortolussi, Nicolas Gast
We consider stochastic models in the presence of uncertainty, originating from a lack of knowledge of parameters or from unpredictable effects of the environment. We focus on population processes, encompassing a large class of systems, from queueing networks to epidemic spreading. We set up a formal framework for imprecise stochastic processes, where some parameters are allowed to vary in time within a given domain, but with no further constraint. We then consider the limit behaviour of these systems as the population size goes to infinity. We prove that this limit is given by a differential inclusion that can be constructed from the (imprecise) drift. We provide results both for the transient and the steady-state behaviour. Finally, we discuss different approaches to compute bounds on the so-obtained differential inclusions, proposing an effective control-theoretic method based on the Pontryagin principle for transient bounds. This provides an efficient approach for the analysis and design of large-scale uncertain and imprecise stochastic models. The theoretical results are accompanied by an in-depth analysis of an epidemic model and a queueing network. These examples demonstrate the applicability of the numerical methods and the tightness of the approximation.
{"title":"Mean Field Approximation of Uncertain Stochastic Models","authors":"L. Bortolussi, Nicolas Gast","doi":"10.1109/DSN.2016.34","DOIUrl":"https://doi.org/10.1109/DSN.2016.34","url":null,"abstract":"We consider stochastic models in presence of uncertainty, originating from lack of knowledge of parameters or by unpredictable effects of the environment. We focus on population processes, encompassing a large class of systems, from queueing networks to epidemic spreading. We set up a formal framework for imprecise stochastic processes, where some parameters are allowed to vary in time within a given domain, but with no further constraint. We then consider the limit behaviour of these systems as the population size goes to infinity. We prove that this limit is given by a differential inclusion that can be constructed from the (imprecise) drift. We provide results both for the transient and the steady state behaviour. Finally, we discuss different approaches to compute bounds of the so-obtained differential inclusions, proposing an effective control-theoretic method based on Pontryagin principle for transient bounds. This provides an efficient approach for the analysis and design of large-scale uncertain and imprecise stochastic models. The theoretical results are accompanied by an in-depth analysis of an epidemic model and a queueing network. These examples demonstrate the applicability of the numerical methods and the tightness of the approximation.","PeriodicalId":102292,"journal":{"name":"2016 46th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122820804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Uncovering Dynamic Fault Trees
Sebastian Junges, Dennis Guck, J. Katoen, M. Stoelinga
Fault tree analysis is a widespread industry standard for assessing system reliability. Standard (static) fault trees model the failure behaviour of a system in terms of the failures of its components. To overcome their limited expressive power, common dependability patterns, such as spare management, functional dependencies, and sequencing, are considered as extensions. A plethora of such dynamic fault trees (DFTs) have been defined in the literature. They differ in, e.g., the types of gates (elements), their meaning, their expressive power, the way in which failures propagate, how elements are claimed and activated, and how spare races are resolved. This paper systematically uncovers these differences and categorises existing DFT variants. As these differences may have a huge impact on the reliability assessment, awareness of them is important when using DFT modelling and analysis.
{"title":"Uncovering Dynamic Fault Trees","authors":"Sebastian Junges, Dennis Guck, J. Katoen, M. Stoelinga","doi":"10.1109/DSN.2016.35","DOIUrl":"https://doi.org/10.1109/DSN.2016.35","url":null,"abstract":"Fault tree analysis is a widespread industry standard for assessing system reliability. Standard (static) fault trees model the failure behaviour of systems in dependence of their component failures. To overcome their limited expressive power, common dependability patterns, such as spare management, functional dependencies, and sequencing are considered. A plethora of such dynamic fault trees (DFTs) have been defined in the literature. They differ in e.g., the types of gates (elements), their meaning, expressive power, the way in which failures propagate, how elements are claimed and activated, and how spare races are resolved. This paper systematically uncovers these differences and categorises existing DFT variants. As these differences may have huge impact on the reliability assessment, awareness of these impacts is important when using DFT modelling and analysis.","PeriodicalId":102292,"journal":{"name":"2016 46th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115276612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Equipping WAP with WEAPONS to Detect Vulnerabilities: Practical Experience Report
Ibéria Medeiros, N. Neves, M. Correia
Although security is increasingly taken into account during software development, source code still tends to contain vulnerabilities. Open source static analysis tools provide a sensible approach to mitigating this problem. However, these tools are programmed to detect a specific set of vulnerabilities, and they are often difficult to extend to detect new ones. WAP is a recent and popular open source tool that detects vulnerabilities in the source code of web applications written in PHP. This paper addresses the difficulty of extending such tools by proposing a modular and extensible version of WAP, equipping it with "weapons" to detect (and correct) new vulnerability classes. The new version of the tool was evaluated with seven new vulnerability classes using web applications and plugins of the widely adopted WordPress content management system. The experimental results show that this extensibility allows WAP to find many new (zero-day) vulnerabilities.
{"title":"Equipping WAP with WEAPONS to Detect Vulnerabilities: Practical Experience Report","authors":"Ibéria Medeiros, N. Neves, M. Correia","doi":"10.1109/DSN.2016.63","DOIUrl":"https://doi.org/10.1109/DSN.2016.63","url":null,"abstract":"Although security starts to be taken into account during software development, the tendency for source code to contain vulnerabilities persists. Open source static analysis tools provide a sensible approach to mitigate this problem. However, these tools are programmed to detect a specific set of vulnerabilities and they are often difficult to extend to detect new ones. WAP is a recent popular open source tool that detects vulnerabilities in the source code of web applications written in PHP. The paper addresses the difficulty of extending these tools by proposing a modular and extensible version of the WAP tool, equipping it with \"weapons\" to detect (and correct) new vulnerability classes. The new version of the tool was evaluated with seven new vulnerability classes using web applications and plugins of the widely-adopted WordPress content management system. The experimental results show that this extensibility allows WAP to find many new (zero-day) vulnerabilities.","PeriodicalId":102292,"journal":{"name":"2016 46th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)","volume":"169 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124722122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Quantitative Methodology for Security Monitor Deployment
Uttam Thakore, G. Weaver, W. Sanders
Intrusion detection and forensic analysis techniques depend upon monitors to collect information about possible attacks. Since monitoring can be expensive, however, monitors must be deployed selectively to maximize their overall utility. This paper introduces a methodology both to evaluate monitor deployments quantitatively in terms of security goals and to deploy monitors optimally under cost constraints. First, we define a model that describes the system assets, the deployable monitors, and the relationship between generated data and intrusions. Then, we define a set of metrics that quantify the utility and richness of monitor data with respect to intrusion detection, as well as the cost associated with deployment. Finally, we formulate a method that uses our model and metrics to determine the cost-optimal, maximum-utility placement of monitors. We present an enterprise Web service use case and illustrate how our metrics can be used to determine optimal monitor deployments for a set of common attacks on Web servers. Our approach is scalable: it can compute optimal monitor deployments for systems with hundreds of monitors and attacks within minutes.
{"title":"A Quantitative Methodology for Security Monitor Deployment","authors":"Uttam Thakore, G. Weaver, W. Sanders","doi":"10.1109/DSN.2016.10","DOIUrl":"https://doi.org/10.1109/DSN.2016.10","url":null,"abstract":"Intrusion detection and forensic analysis techniques depend upon monitors to collect information about possible attacks. Since monitoring can be expensive, however, monitors must be selectively deployed to maximize their overall utility. This paper introduces a methodology both to evaluate monitor deployments quantitatively in terms of security goals and to deploy monitors optimally based on cost constraints. First, we define a model that describes the system assets, deployable monitors, and the relationship between generated data and intrusions. Then, we define a set of metrics that quantify the utility and richness of monitor data with respect to intrusion detection and the cost associated with deployment. Finally, we formulate a method using our model and metrics to determine the cost-optimal, maximum-utility placement of monitors. We present an enterprise Web service use case and illustrate how our metrics can be used to determine optimal monitor deployments for a set of common attacks on Web servers. Our approach is scalable, being able to compute within minutes optimal monitor deployments for systems with hundreds of monitors and attacks.","PeriodicalId":102292,"journal":{"name":"2016 46th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115536561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Characterizing the Consistency of Online Services (Practical Experience Report)
Filipe Freitas, J. Leitao, Nuno M. Preguiça, R. Rodrigues
While several proposals exist for the specification and implementation of various consistency models, little is known about the consistency currently offered by online services with millions of users. Such knowledge is important, not only because it allows for setting the right expectations and justifying the behavior observed by users, but also because it can be used to improve the process of developing applications that use the APIs offered by such services. To fill this gap, this paper presents a measurement study of the consistency of the APIs exported by four widely used Internet services: the Facebook Feed, Facebook Groups, Blogger, and Google+. To conduct this study, our work (1) proposes definitions for a set of relevant consistency properties, (2) develops a simple yet generic methodology comprising a small number of tests that probe these services from a user perspective and try to uncover consistency anomalies key to our definitions, and (3) reports on the analysis of the data obtained from running these tests over a period of several weeks. Our measurement study shows that some of these services do exhibit consistency anomalies, including behaviors that may appear counter-intuitive to users, such as the lack of session guarantees for write monotonicity.
{"title":"Characterizing the Consistency of Online Services (Practical Experience Report)","authors":"Filipe Freitas, J. Leitao, Nuno M. Preguiça, R. Rodrigues","doi":"10.1109/DSN.2016.64","DOIUrl":"https://doi.org/10.1109/DSN.2016.64","url":null,"abstract":"While several proposals for the specification and implementation of various consistency models exist, little is known about what is the consistency currently offered by online services with millions of users. Such knowledge is important, not only because it allows for setting the right expectations and justifying the behavior observed by users, but also because it can be used for improving the process of developing applications that use APIs offered by such services. To fill this gap, this paper presents a measurement study of the consistency of the APIs exported by four widely used Internet services, the Facebook Feed, Facebook Groups, Blogger, and Google+. To conduct this study, our work (1) proposes definitions for a set of relevant consistency properties, (2) develops a simple, yet generic methodology comprising a small number of tests, which probe these services from a user perspective, and try to uncover consistency anomalies that are key to our definitions, and (3) reports on the analysis of the data obtained from running these tests for a period of several weeks. Our measurement study shows that some of these services do exhibit consistency anomalies, including some behaviors that may appear counter-intuitive for users, such as the lack of session guarantees for write monotonicity.","PeriodicalId":102292,"journal":{"name":"2016 46th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116835517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}