
Proceedings of the First Workshop on Principles and Practice of Consistency for Distributed Data: latest publications

On the consistency of heterogeneous composite objects
A. Bessani, Ricardo Mendes, Tiago Oliveira
Several recent cloud-backed storage systems advocate composing a number of cloud services to improve performance and fault tolerance (e.g., [1, 3, 4]). An interesting aspect of these compositions is that the consistency guarantees they provide depend on the consistency of the underlying base services, which normally differ.
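The core observation can be sketched in a few lines of Python (our illustration, not the paper's model; the consistency levels and store names are assumptions): a composition cannot promise more than its weakest base service.

```python
# Illustrative ordering of consistency guarantees, weakest to strongest.
LEVELS = {"eventual": 0, "causal": 1, "regular": 2, "atomic": 3}

def composite_consistency(base_services):
    """Return the strongest guarantee the composite object can offer:
    the weakest level among the base services it is built from."""
    return min(base_services.values(), key=lambda lvl: LEVELS[lvl])

# Hypothetical composition of three cloud stores with different guarantees.
stores = {"blob-store": "eventual", "strong-kv": "atomic", "causal-kv": "causal"}
print(composite_consistency(stores))  # the weakest guarantee dominates
```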
DOI: 10.1145/2745947.2746687
Citations: 0
Designing a causally consistent protocol for geo-distributed partial replication
Tyler Crain, M. Shapiro
Modern internet applications must scale to millions of clients, deliver response times in the tens of milliseconds, and remain available in the presence of partitions, hardware faults and even disasters. To meet these requirements, applications are usually geo-replicated across several data centres (DCs) spread throughout the world, giving clients fast access to nearby DCs and fault tolerance in case of a DC outage. Using multiple replicas also has disadvantages: it incurs extra storage, bandwidth and hardware costs, and programming these systems becomes more difficult. To address the additional hardware costs, data is often partially replicated, meaning that only certain DCs keep a copy of certain data; a key-value store, for example, may store only the values corresponding to a portion of the keys. To address the difficulty of programming these systems, consistency protocols run on top to ensure different guarantees for the data, but as the CAP theorem shows, strong consistency, availability, and partition tolerance cannot be ensured at the same time. For many applications availability is paramount, so strong consistency is traded for weaker models that allow concurrent writes, such as causal consistency. Unfortunately, existing protocols are not designed with partial replication in mind and either do not support it or support it inefficiently. In this work we examine why this happens and propose a protocol designed to support partial replication under causal consistency more efficiently.
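A minimal sketch of the dependency check at the heart of causal consistency (our illustration, not the authors' protocol; the DC names and sequence numbers are assumptions): a DC applies a remote update only once everything it causally depends on has been applied locally.

```python
def can_apply(deps, applied):
    """Decide whether an incoming remote update is safe to apply.

    deps:    {dc_id: seqno} the update causally depends on
    applied: {dc_id: highest seqno already applied at this DC}

    Under partial replication the hard part, which the paper targets,
    is that a DC may not even replicate the data a dependency refers to,
    so this purely local check no longer suffices.
    """
    return all(applied.get(dc, 0) >= seq for dc, seq in deps.items())

applied = {"eu": 5, "us": 3}
print(can_apply({"eu": 4, "us": 3}, applied))  # True: dependencies satisfied
print(can_apply({"us": 4}, applied))           # False: still waiting for us:4
```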
DOI: 10.1145/2745947.2745953
Citations: 24
Reducing the vulnerability window in distributed transactional protocols
Manuel Bravo, P. Romano, L. Rodrigues, P. V. Roy
In this paper, we introduce a technique that distributed transactional protocols can use to reduce the vulnerability window of transactions. For this purpose, we propose a usage of hybrid clocks that is, to the best of our knowledge, so far unexplored. On one hand, loosely synchronized physical clocks are used to maximize the freshness of the snapshots that transactions read from. On the other hand, logical clocks are used to limit how far the snapshot of an update transaction is advanced upon its commit. We claim that the joint usage of these two techniques can reduce the abort rate compared to previous protocols such as Clock-SI, GMU, and SCORe.
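To make "hybrid clocks" concrete, here is a sketch in the spirit of Hybrid Logical Clocks (our illustration; the paper's exact construction may differ): the physical component keeps timestamps close to real time, which keeps snapshots fresh, while the logical component orders events that share a physical timestamp.

```python
class HybridClock:
    """A (physical, logical) timestamp pair, updated on local and receive events."""

    def __init__(self):
        self.l = 0   # physical component: max physical time observed so far
        self.c = 0   # logical component: tie-breaker within one physical tick

    def tick(self, physical_now):
        """Advance the clock for a local or send event."""
        new_l = max(self.l, physical_now)
        self.c = self.c + 1 if new_l == self.l else 0
        self.l = new_l
        return (self.l, self.c)

    def receive(self, physical_now, msg_ts):
        """Merge a remote timestamp on message receipt."""
        ml, mc = msg_ts
        new_l = max(self.l, ml, physical_now)
        if new_l == self.l == ml:
            self.c = max(self.c, mc) + 1
        elif new_l == self.l:
            self.c += 1
        elif new_l == ml:
            self.c = mc + 1
        else:
            self.c = 0
        self.l = new_l
        return (self.l, self.c)

clk = HybridClock()
print(clk.tick(10))             # (10, 0)
print(clk.tick(10))             # (10, 1): logical part breaks the tie
print(clk.receive(9, (12, 4)))  # (12, 5): adopts the larger remote clock
```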
DOI: 10.1145/2745947.2746688
Citations: 1
Claret: using data types for highly concurrent distributed transactions
B. Holt, Irene Zhang, Dan R. K. Ports, M. Oskin, L. Ceze
Out of the many NoSQL databases in use today, some that provide simple data structures for records, such as Redis and MongoDB, are becoming popular. Building applications out of these complex data types provides a way to communicate intent to the database system without sacrificing flexibility or committing to a fixed schema. Currently this capability is leveraged only in limited ways, such as ensuring related values are co-located, or for atomic updates. There are many as-yet unexploited ways in which data types can make databases more efficient. We explore several ways of leveraging abstract data type (ADT) semantics in databases, focusing primarily on commutativity. Using a Twitter clone as a case study, we show that exploiting commutativity can reduce transaction abort rates for the high-contention, update-heavy workloads that arise in real social networks. We conclude that ADTs are a good abstraction for database records, providing a safe and expressive programming model with ample opportunities for optimization, making databases safer and more scalable.
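The abort-rate argument can be sketched as follows (our illustration, not Claret's implementation; the operation names are assumptions): a conflict check that knows two increments commute can admit concurrent transactions that a plain read/write check would abort.

```python
def rw_conflict(op1, op2):
    """Classic read/write check: any two writes to the same key conflict."""
    return op1[1] == op2[1]

def adt_conflict(op1, op2):
    """ADT-aware check: commutative operations on the same key do not conflict."""
    commutative_pairs = {("incr", "incr"), ("add", "add")}  # illustrative
    if op1[1] != op2[1]:
        return False
    return (op1[0], op2[0]) not in commutative_pairs

# Two users retweeting concurrently: a hot key in a social-network workload.
a, b = ("incr", "retweets"), ("incr", "retweets")
print(rw_conflict(a, b))   # True: one transaction would be aborted
print(adt_conflict(a, b))  # False: both can commit, increments commute
```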
DOI: 10.1145/2745947.2745951
Citations: 8
An empirical perspective on causal consistency
Alejandro Z. Tomsic, Tyler Crain, M. Shapiro
Causal consistency is the strongest consistency model under which low latency and high availability can be achieved. In the past few years, many causally consistent storage systems have been developed. The long-term goal of this initial work is to perform a deep study and comparison of the different implementations of causal consistency. We identify that protocols providing causal consistency share the well-known DUR (deferred update replication) algorithmic structure, and observe that existing implementations of causal consistency fall into a sub-category of DUR that we name A-DUR (Asynchronous-DUR). In this work, we present the A-DUR algorithmic structure and the pseudocode for instantiating two causally consistent protocols under the G-DUR framework, and describe the empirical study we intend to perform on causal consistency.
DOI: 10.1145/2745947.2745949
Citations: 5
Adaptive strength geo-replication strategy
Amadeo Ascó Signes, Annette Bieniusa
The amount of data processed in Data Centres (DCs) keeps growing at an enormous rate, so full replication may become impractical. Replication between DCs is used to increase data availability in the presence of site failures and, where possible, to reduce latency by accessing nearby data. Replicating data in only some of the DCs therefore becomes more important in order to reduce the cost of keeping the data (weakly) consistent while maintaining high availability (scalability) and low access costs. When read and write request patterns change, the decision of which data to replicate, and where, must be made dynamically. Since the problem of finding an optimal replication schema in a general network has been shown to be NP-complete even in the static case, a general algorithm that optimally solves the dynamic problem is unlikely to exist. We present a new bio-inspired replication strategy, modelled on the Ant Colony algorithm, that is completely decentralised, adaptive, and event-driven.
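A hedged sketch of the ant-colony intuition (the update rule and all constants are our illustrative assumptions, not the paper's parameters): each DC keeps a pheromone-like "strength" per data item that local reads reinforce and time decays, and it keeps a replica only while the strength exceeds a threshold.

```python
DECAY, REINFORCE, THRESHOLD = 0.5, 1.0, 0.5  # illustrative constants

def step(strength, local_reads):
    # Evaporate old pheromone, then deposit some for each read served locally.
    return strength * DECAY + REINFORCE * local_reads

s = 0.0
for reads in [3, 2, 0, 0, 0, 0]:   # a demand spike, then silence
    s = step(s, reads)
    print(round(s, 3), "keep replica" if s > THRESHOLD else "drop replica")
```

Once local demand fades, the strength decays below the threshold and the replica is dropped, freeing storage at DCs that no longer benefit from it.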
DOI: 10.1145/2745947.2745950
Citations: 2
Collaborative offline web applications using conflict-free replicated data types
Santiago J. Castiñeira, Annette Bieniusa
The use cases for Conflict-free Replicated Data Types (CRDTs) that are studied in the literature are limited to collaborative editing applications and data stores. The communication protocols used to distribute replica updates in these scenarios are usually assumed to be some form of highly scalable gossip protocol. In this paper, a new type of application for CRDTs is introduced and studied: collaborative offline web applications. We demonstrate the feasibility of CRDTs in this scenario, and analyze the trade-offs of three existing communication protocols that can be employed for these applications.
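A minimal state-based CRDT of the kind such an offline web application could replicate, sketched here as a grow-only counter (our illustration; the paper does not prescribe this data type): each client updates locally while offline and merges states on reconnection; because merge is commutative, associative, and idempotent, replicas converge regardless of message order.

```python
def increment(state, replica_id, n=1):
    """Each replica only ever increments its own entry."""
    state = dict(state)
    state[replica_id] = state.get(replica_id, 0) + n
    return state

def merge(a, b):
    """Pointwise maximum: commutative, associative, and idempotent."""
    return {r: max(a.get(r, 0), b.get(r, 0)) for r in a.keys() | b.keys()}

def value(state):
    return sum(state.values())

phone, laptop = {}, {}
phone = increment(phone, "phone", 2)    # edits made while offline
laptop = increment(laptop, "laptop", 3)
print(value(merge(phone, laptop)))      # 5, in whichever order states meet
```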
DOI: 10.1145/2745947.2745952
Citations: 3
A study of CRDTs that do computations
David Navalho, S. Duarte, Nuno M. Preguiça
A CRDT is a data type specially designed to allow multiple replicas to be modified without coordination, while providing an automatic mechanism for merging concurrent updates that guarantees eventual consistency. In this paper we present a brief study of computational CRDTs, a class of CRDTs whose state is the result of a computation over the executed updates. We propose three generic designs that reduce the amount of information each replica maintains and propagates for synchronization. For each design, we discuss the properties that the computed function must satisfy.
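To illustrate the idea of a computational CRDT (our sketch, not one of the paper's three designs): instead of keeping every update, each replica keeps a reduced summary, here `(sum, count)` for computing an average, which works because the function can be recomputed from mergeable summaries.

```python
def update(state, sample):
    """Fold a new observation into the local summary."""
    s, n = state
    return (s + sample, n + 1)

def merge(a, b):
    """Combine summaries of disjoint sets of updates (a property a real
    design must guarantee, e.g. by tracking which updates each summary
    covers; this sketch simply assumes it)."""
    return (a[0] + b[0], a[1] + b[1])

def read(state):
    s, n = state
    return s / n if n else 0.0

r1 = update(update((0, 0), 10), 20)   # replica 1 observed 10 and 20
r2 = update((0, 0), 30)               # replica 2 observed 30
print(read(merge(r1, r2)))            # 20.0: the average over all updates
```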
DOI: 10.1145/2745947.2745948
Citations: 7
Minimizing coordination in replicated systems
Cheng Li, J. Leitao, Allen Clement, Nuno M. Preguiça, R. Rodrigues
Replication has been widely adopted to build highly scalable services, but this goal is often compromised by the coordination required to ensure application-specific properties such as state convergence and invariant preservation. In this paper, we propose a principled mechanism to minimize coordination in replicated systems via the following components: a) a notion of restriction over pairs of operations, which captures the fact that the two operations must be ordered w.r.t. each other in any partial order; b) a generic consistency model which, given a set of restrictions, requires those restrictions to be met in all admissible partial orders; c) principles for identifying a minimal set of restrictions to ensure the above properties; and d) a coordination service that dynamically maps restrictions to the most efficient coordination protocols. Our preliminary experience with example applications shows that we are able to determine a minimal coordination strategy.
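The restriction notion can be sketched as a lookup over pairs of operations (our illustration; the operation names and the single restricted pair are assumptions): only pairs that could jointly violate an invariant need to be ordered, and everything else proceeds without coordination.

```python
# A restriction marks a pair of operations that must be mutually ordered,
# e.g. two concurrent withdrawals could together overdraw an account.
RESTRICTED = {frozenset({"withdraw"})}  # the pair (withdraw, withdraw)

def needs_coordination(op1, op2):
    """True iff the pair appears in the restriction set; frozenset makes
    the check symmetric, matching the 'ordered w.r.t. each other' notion."""
    return frozenset({op1, op2}) in RESTRICTED

print(needs_coordination("withdraw", "withdraw"))  # True: must be ordered
print(needs_coordination("deposit", "withdraw"))   # False: commute safely
```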
DOI: 10.1145/2745947.2745955
Citations: 6
Lasp: a language for distributed, eventually consistent computations with CRDTs
Christopher S. Meiklejohn, P. V. Roy
We propose Lasp, a novel programming model aimed at simplifying correct, large-scale, distributed programming. Lasp leverages ideas from distributed dataflow programming, extended with convergent data types. Through Lasp's "convergent by design" applications, this supports computations in which not all participants are online at a given moment. Lasp provides familiar functional programming semantics, built on top of distributed systems infrastructure and targeted at the Erlang runtime system. The initial Lasp design presented in this report supports synchronization-free programming using convergent data types, combining the expressiveness of these data types with powerful primitives for composing them. This design lets us write long-lived, fault-tolerant distributed applications with non-monotonic behavior. We show how to implement one nontrivial large-scale application, the ad counter scenario from the SyncFree project.
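The dataflow-over-CRDTs idea can be sketched in Python (Lasp itself targets the Erlang runtime; this is purely our illustration): derived collections are declared as functions of convergent inputs, so when replicas of the input converge, the derived output converges too.

```python
def g_set_merge(a, b):
    """Grow-only set CRDT: merge is set union."""
    return a | b

def derived(input_set):
    """A Lasp-style 'map' over a convergent input collection."""
    return {x * 2 for x in input_set}

replica_a, replica_b = {1, 2}, {2, 3}
merged = g_set_merge(replica_a, replica_b)
print(sorted(derived(merged)))  # [2, 4, 6]

# Mapping each replica then merging gives the same result as merging then
# mapping, which is why the derived collection also converges.
print(derived(replica_a) | derived(replica_b) == derived(merged))  # True
```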
DOI: 10.1145/2745947.2745954
Citations: 16