Big(ger) sets: decomposed delta CRDT sets in Riak

R. Brown, Zeeshan Ali Lakhani, P. Place
{"title":"Big(ger) sets: decomposed delta CRDT sets in Riak","authors":"R. Brown, Zeeshan Ali Lakhani, P. Place","doi":"10.1145/2911151.2911156","DOIUrl":null,"url":null,"abstract":"CRDT[24] Sets as implemented in Riak[6] perform poorly for writes, both as cardinality grows, and for sets larger than 500KB[25]. Riak users wish to create high cardinality CRDT sets, and expect better than O(n) performance for individual insert and remove operations. By decomposing a CRDT set on disk, and employing delta-replication[2], we can achieve far better performance than just delta replication alone: relative to the size of causal metadata, not the cardinality of the set, and we can support sets that are 100s times the size of Riak sets, while still providing the same level of consistency. There is a trade-off in read performance but we expect it is mitigated by enabling queries on sets.","PeriodicalId":259835,"journal":{"name":"Proceedings of the 2nd Workshop on the Principles and Practice of Consistency for Distributed Data","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2nd Workshop on the Principles and Practice of Consistency for Distributed Data","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2911151.2911156","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 7

Abstract

CRDT sets[24] as implemented in Riak[6] perform poorly for writes, both as cardinality grows and for sets larger than 500 KB[25]. Riak users wish to create high-cardinality CRDT sets, and expect better than O(n) performance for individual insert and remove operations. By decomposing a CRDT set on disk and employing delta-replication[2], we achieve far better performance than delta replication alone: write cost is relative to the size of the causal metadata, not the cardinality of the set, and we can support sets hundreds of times the size of current Riak sets while still providing the same level of consistency. There is a trade-off in read performance, but we expect it to be mitigated by enabling queries on sets.
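The paper's system lives inside Riak (which is implemented in Erlang); the Python sketch below is only an illustration of the two ideas the abstract combines. Each element is kept under its own key with its own "dots" (causal birth events), so an insert or remove touches one element's causal metadata rather than rewriting the whole set, and each operation emits a small delta for replication. The `DeltaORSet` name and every detail here are assumptions for illustration, not the authors' code.

```python
# A minimal sketch (NOT the paper's Erlang implementation): a delta-based
# observed-remove set in which each element is stored under its own key
# with only its own dots. Adds and removes produce deltas whose size is
# proportional to that element's causal metadata, not the set cardinality.

from collections import defaultdict

class DeltaORSet:
    """Illustrative delta OR-Set; names and structure are assumptions."""

    def __init__(self, actor):
        self.actor = actor
        self.clock = defaultdict(int)  # causal context: actor -> max counter seen
        self.elements = {}             # element -> set of dots (actor, counter)

    def add(self, elem):
        """Insert elem locally and return a small delta to replicate."""
        self.clock[self.actor] += 1
        dot = (self.actor, self.clock[self.actor])
        self.elements.setdefault(elem, set()).add(dot)
        # The delta carries just this element's new dot and its context.
        return ({elem: {dot}}, {self.actor: self.clock[self.actor]})

    def remove(self, elem):
        """Remove elem locally; the delta removes only dots we have observed."""
        self.elements.pop(elem, None)
        # No surviving dots for elem, shipped under our full observed context,
        # so concurrent (unseen) adds of elem still win at other replicas.
        return ({elem: set()}, dict(self.clock))

    def join(self, delta):
        """Merge a delta: per-element dot reconciliation, then merge contexts."""
        delta_elems, delta_ctx = delta
        for elem, remote_dots in delta_elems.items():
            local_dots = self.elements.get(elem, set())
            # Keep local dots the delta still asserts, or has never observed.
            keep = {d for d in local_dots
                    if d in remote_dots or d[1] > delta_ctx.get(d[0], 0)}
            # Adopt remote dots that are new to us.
            keep |= {d for d in remote_dots
                     if d[1] > self.clock.get(d[0], 0)}
            if keep:
                self.elements[elem] = keep
            else:
                self.elements.pop(elem, None)
        for actor, counter in delta_ctx.items():
            self.clock[actor] = max(self.clock[actor], counter)

# Usage: replicas exchange deltas, never full states.
a, b = DeltaORSet("a"), DeltaORSet("b")
b.join(a.add("x"))      # ship only the delta, not the whole set
b.join(a.remove("x"))   # observed-remove: "a" saw its own add
assert "x" not in b.elements
```

In the actual design the `elements` map would correspond to per-element keys in the on-disk backend (the decomposition the abstract describes), and a production causal context would need a dotted version vector to tolerate deltas arriving out of order; both are elided in this sketch.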