
Proceedings of the Twenty-Third ACM Symposium on Operating Systems Principles: Latest Publications

Thialfi: a client notification service for internet-scale applications
Pub Date: 2011-10-23 | DOI: 10.1145/2043556.2043570
A. Adya, Gregory Cooper, Daniel S. Myers, M. Piatek
Ensuring the freshness of client data is a fundamental problem for applications that rely on cloud infrastructure to store data and mediate sharing. Thialfi is a notification service developed at Google to simplify this task. Thialfi supports applications written in multiple programming languages and running on multiple platforms, e.g., browsers, phones, and desktops. Applications register their interest in a set of shared objects and receive notifications when those objects change. Thialfi servers run in multiple Google data centers for availability and replicate their state asynchronously. Thialfi's approach to recovery emphasizes simplicity: all server state is soft, and clients drive recovery and assist in replication. A principal goal of our design is to provide a straightforward API and good semantics despite a variety of failures, including server crashes, communication failures, storage unavailability, and data center failures. Evaluation of live deployments confirms that Thialfi is scalable, efficient, and robust. In production use, Thialfi has scaled to millions of users and delivers notifications with an average delay of less than one second.
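As a rough illustration of the register/notify pattern the abstract describes, a minimal Python sketch follows; the class and method names (NotificationService, register, object_changed) are hypothetical and do not reflect Thialfi's actual API.

```python
# Minimal sketch of the register/notify pattern described in the abstract.
# Names are illustrative only, not Thialfi's API.
from collections import defaultdict


class NotificationService:
    """Toy in-memory stand-in for the server side: tracks which
    clients registered interest in which object IDs."""

    def __init__(self):
        self._interest = defaultdict(set)  # object_id -> set of clients

    def register(self, client, object_id):
        self._interest[object_id].add(client)

    def object_changed(self, object_id, version):
        # Fan a notification out to every registered client.
        for client in self._interest[object_id]:
            client.on_notify(object_id, version)


class Client:
    def __init__(self, name):
        self.name = name

    def on_notify(self, object_id, version):
        # A real client would refetch the object from application storage.
        print(f"{self.name}: {object_id} changed, refetch at version {version}")


service = NotificationService()
alice = Client("alice")
service.register(alice, "doc:42")
service.object_changed("doc:42", version=7)
```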
Citations: 41
SILT: a memory-efficient, high-performance key-value store
Pub Date: 2011-10-23 | DOI: 10.1145/2043556.2043558
Hyeontaek Lim, Bin Fan, D. Andersen, M. Kaminsky
SILT (Small Index Large Table) is a memory-efficient, high-performance key-value store system based on flash storage that scales to serve billions of key-value items on a single node. It requires only 0.7 bytes of DRAM per entry and retrieves key/value pairs using on average 1.01 flash reads each. SILT combines new algorithmic and systems techniques to balance the use of memory, storage, and computation. Our contributions include: (1) the design of three basic key-value stores each with a different emphasis on memory-efficiency and write-friendliness; (2) synthesis of the basic key-value stores to build a SILT key-value store system; and (3) an analytical model for tuning system parameters carefully to meet the needs of different workloads. SILT requires one to two orders of magnitude less memory to provide comparable throughput to current high-performance key-value systems on a commodity desktop system with flash storage.
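The abstract's combination of several basic key-value stores suggests a tiered lookup: reads try a small write-friendly store first, then fall back to more compact read-only stores. The sketch below illustrates only that idea; the tier names are invented and do not correspond to SILT's actual components.

```python
# Sketch of a tiered key-value lookup: a write-friendly tier in front of
# compact read-only tiers, checked newest-to-oldest. Illustrative only.


class WriteTier:
    """Accepts new writes; memory-hungry per entry but write-friendly."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

    def items(self):
        return self._data.items()


class ReadOnlyTier:
    """Frozen, more compact tier built from an older write tier."""

    def __init__(self, items):
        self._data = dict(items)

    def get(self, key):
        return self._data.get(key)


class TieredStore:
    def __init__(self):
        self.write_tier = WriteTier()
        self.read_tiers = []  # newest first

    def put(self, key, value):
        self.write_tier.put(key, value)

    def get(self, key):
        # Check tiers newest-to-oldest so the latest write wins.
        value = self.write_tier.get(key)
        if value is not None:
            return value
        for tier in self.read_tiers:
            value = tier.get(key)
            if value is not None:
                return value
        return None

    def freeze(self):
        # Convert the current write tier into a compact read-only tier.
        self.read_tiers.insert(0, ReadOnlyTier(self.write_tier.items()))
        self.write_tier = WriteTier()


store = TieredStore()
store.put("k1", "v1")
store.freeze()
store.put("k1", "v2")
assert store.get("k1") == "v2"
```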
Citations: 330
Breaking up is hard to do: security and functionality in a commodity hypervisor
Pub Date: 2011-10-23 | DOI: 10.1145/2043556.2043575
Patrick Colp, Mihir Nanavati, Jun Zhu, W. Aiello, George Coker, T. Deegan, Peter Loscocco, A. Warfield
Cloud computing uses virtualization to lease small slices of large-scale datacenter facilities to individual paying customers. These multi-tenant environments, on which numerous large and popular web-based applications run today, are founded on the belief that the virtualization platform is sufficiently secure to prevent breaches of isolation between different users who are co-located on the same host. Hypervisors are believed to be trustworthy in this role because of their small size and narrow interfaces. We observe that despite the modest footprint of the hypervisor itself, these platforms have a large aggregate trusted computing base (TCB) that includes a monolithic control VM with numerous interfaces exposed to VMs. We present Xoar, a modified version of Xen that retrofits the modularity and isolation principles used in micro-kernels onto a mature virtualization platform. Xoar breaks the control VM into single-purpose components called service VMs. We show that this componentized abstraction brings a number of benefits: sharing of service components by guests is configurable and auditable, making exposure to risk explicit, and access to the hypervisor is restricted to the least privilege required for each component. Microrebooting components at configurable frequencies reduces the temporal attack surface of individual components. Our approach incurs little performance overhead, and does not require functionality to be sacrificed or components to be rewritten from scratch.
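A minimal sketch of the microreboot idea mentioned above: each single-purpose component carries a configurable reboot interval, and a scheduler pass restarts any component whose interval has elapsed. The names and the in-process restart are illustrative assumptions; real service VMs would be rebooted by the virtualization platform, not by Python code.

```python
# Sketch of periodic microreboots for single-purpose components.
# Names and the simulated restart are illustrative only.
import time


class ServiceComponent:
    def __init__(self, name, reboot_interval_s):
        self.name = name
        self.reboot_interval_s = reboot_interval_s
        self.started_at = time.monotonic()

    def restart_from_clean_image(self):
        # A real microreboot reloads the component from a pristine image,
        # discarding any state an attacker may have accumulated in it.
        print(f"microrebooting {self.name}")
        self.started_at = time.monotonic()

    def due_for_reboot(self, now):
        return now - self.started_at >= self.reboot_interval_s


def microreboot_pass(components):
    """One scheduler pass: restart every component whose interval elapsed."""
    now = time.monotonic()
    for component in components:
        if component.due_for_reboot(now):
            component.restart_from_clean_image()


components = [ServiceComponent("net-backend", reboot_interval_s=0.0),
              ServiceComponent("block-backend", reboot_interval_s=60.0)]
microreboot_pass(components)  # restarts only the component that is due
```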
Citations: 160
Session details: Virtualization
G. Heiser
{"title":"Session details: Virtualization","authors":"G. Heiser","doi":"10.1145/3247976","DOIUrl":"https://doi.org/10.1145/3247976","url":null,"abstract":"","PeriodicalId":20672,"journal":{"name":"Proceedings of the Twenty-Third ACM Symposium on Operating Systems Principles","volume":"179 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2011-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76314491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Session details: Detection and tracing
R. Isaacs
{"title":"Session details: Detection and tracing","authors":"R. Isaacs","doi":"10.1145/3247978","DOIUrl":"https://doi.org/10.1145/3247978","url":null,"abstract":"","PeriodicalId":20672,"journal":{"name":"Proceedings of the Twenty-Third ACM Symposium on Operating Systems Principles","volume":"98 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2011-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85378894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Differentiated storage services
Pub Date: 2011-10-23 | DOI: 10.1145/2043556.2043563
M. Mesnier, Feng Chen, Tian Luo, Jason B. Akers
We propose an I/O classification architecture to close the widening semantic gap between computer systems and storage systems. By classifying I/O, a computer system can request that different classes of data be handled with different storage system policies. Specifically, when a storage system is first initialized, we assign performance policies to predefined classes, such as the filesystem journal. Then, online, we include a classifier with each I/O command (e.g., SCSI), thereby allowing the storage system to enforce the associated policy for each I/O that it receives. Our immediate application is caching. We present filesystem prototypes and a database proof-of-concept that classify all disk I/O --- with very little modification to the filesystem, database, and operating system. We associate caching policies with various classes (e.g., large files shall be evicted before metadata and small files), and we show that end-to-end file system performance can be improved by over a factor of two, relative to conventional caches like LRU. And caching is simply one of many possible applications. As part of our ongoing work, we are exploring other classes, policies and storage system mechanisms that can be used to improve end-to-end performance, reliability and security.
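A minimal sketch of class-aware caching in the spirit of the abstract: each cached block carries an I/O class tag, and eviction follows a per-class priority so large-file data leaves before small files and metadata. The class names and the priority ordering are assumptions for illustration, not the paper's actual policies.

```python
# Sketch of a class-aware cache: blocks are tagged with an I/O class and
# eviction follows a per-class priority (lower evicted first).
# Class names and ordering are illustrative.
from collections import OrderedDict

# Eviction priority: lower numbers are evicted first (assumed policy).
EVICTION_PRIORITY = {"large_file": 0, "small_file": 1, "metadata": 2}


class ClassAwareCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._blocks = OrderedDict()  # block_id -> (io_class, data), LRU order

    def insert(self, block_id, io_class, data):
        if block_id in self._blocks:
            self._blocks.move_to_end(block_id)
        self._blocks[block_id] = (io_class, data)
        while len(self._blocks) > self.capacity:
            self._evict_one()

    def _evict_one(self):
        # Evict the least-recently-used block of the lowest-priority class.
        victim, _ = min(
            ((block_id, (EVICTION_PRIORITY[cls], pos))
             for pos, (block_id, (cls, _)) in enumerate(self._blocks.items())),
            key=lambda item: item[1],
        )
        del self._blocks[victim]


cache = ClassAwareCache(capacity=2)
cache.insert("b1", "metadata", b"inode")
cache.insert("b2", "large_file", b"chunk")
cache.insert("b3", "small_file", b"cfg")   # evicts b2 (large_file class) first
assert "b1" in cache._blocks and "b2" not in cache._blocks
```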
Citations: 91
Detecting failures in distributed systems with the Falcon spy network
Pub Date: 2011-10-23 | DOI: 10.1145/2043556.2043583
Joshua B. Leners, Hao-Che Wu, W. Hung, M. Aguilera, Michael Walfish
A common way for a distributed system to tolerate crashes is to explicitly detect them and then recover from them. Interestingly, detection can take much longer than recovery, as a result of many advances in recovery techniques, making failure detection the dominant factor in these systems' unavailability when a crash occurs. This paper presents the design, implementation, and evaluation of Falcon, a failure detector with several features. First, Falcon's common-case detection time is sub-second, which keeps unavailability low. Second, Falcon is reliable: it never reports a process as down when it is actually up. Third, Falcon sometimes kills to achieve reliable detection but aims to kill the smallest needed component. Falcon achieves these features by coordinating a network of spies, each monitoring a layer of the system. Falcon's main cost is a small amount of platform-specific logic. Falcon is thus the first failure detector that is fast, reliable, and viable. As such, it could change the way that a class of distributed systems is built.
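A minimal sketch of the layered spy idea: one spy per layer probes its component, and a spy that cannot decide kills the component so that a "down" report is never wrong. The layer names, the probe/kill callbacks, and the overall structure are illustrative assumptions, not Falcon's implementation.

```python
# Sketch of a layered "spy network" failure detector: one spy per layer.
# A layer is reported DOWN only after the spy either observes the crash
# or kills the component so the report cannot be wrong. Illustrative only.


class Spy:
    def __init__(self, layer, probe, kill):
        self.layer = layer
        self.probe = probe    # () -> True (up), False (down), None (unsure)
        self.kill = kill      # forcibly terminate the monitored component

    def status(self):
        observed = self.probe()
        if observed is None:
            # Cannot tell: kill the component so "down" is guaranteed true.
            self.kill()
            return False
        return observed


def target_is_up(spies):
    """The target is UP only if every layer's spy reports up."""
    return all(spy.status() for spy in spies)


# Example: the app-layer spy is unsure, so it kills the app and reports down.
spies = [
    Spy("machine", probe=lambda: True, kill=lambda: None),
    Spy("os", probe=lambda: True, kill=lambda: None),
    Spy("app", probe=lambda: None, kill=lambda: print("killing app process")),
]
print("target up?", target_is_up(spies))
```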
Citations: 107
Don't settle for eventual: scalable causal consistency for wide-area storage with COPS
Pub Date: 2011-10-23 | DOI: 10.1145/2043556.2043593
Wyatt Lloyd, M. Freedman, M. Kaminsky, D. Andersen
Geo-replicated, distributed data stores that support complex online applications, such as social networks, must provide an "always-on" experience where operations always complete with low latency. Today's systems often sacrifice strong consistency to achieve these goals, exposing inconsistencies to their clients and necessitating complex application logic. In this paper, we identify and define a consistency model---causal consistency with convergent conflict handling, or causal+---that is the strongest achieved under these constraints. We present the design and implementation of COPS, a key-value store that delivers this consistency model across the wide-area. A key contribution of COPS is its scalability, which can enforce causal dependencies between keys stored across an entire cluster, rather than a single server like previous systems. The central approach in COPS is tracking and explicitly checking whether causal dependencies between keys are satisfied in the local cluster before exposing writes. Further, in COPS-GT, we introduce get transactions in order to obtain a consistent view of multiple keys without locking or blocking. Our evaluation shows that COPS completes operations in less than a millisecond, provides throughput similar to previous systems when using one server per cluster, and scales well as we increase the number of servers in each cluster. It also shows that COPS-GT provides similar latency, throughput, and scaling to COPS for common workloads.
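A minimal sketch of causal dependency tracking as described: a client context records the (key, version) pairs it has observed, attaches them to each put, and a remote replica applies a replicated write only once those dependencies are locally visible. All class and method names are hypothetical, not COPS's API, and a real system would buffer rather than reject unsatisfied writes.

```python
# Sketch of causal dependency checking before exposing replicated writes.
# Names and structure are illustrative only.


class Replica:
    def __init__(self):
        self.store = {}  # key -> (version, value)

    def visible(self, key, version):
        return key in self.store and self.store[key][0] >= version

    def apply_replicated_write(self, key, version, value, deps):
        # Check causal dependencies before exposing the write.
        if all(self.visible(k, v) for k, v in deps):
            self.store[key] = (version, value)
            return True
        return False  # a real system would buffer and retry


class ClientContext:
    """Tracks the causal dependencies accumulated by one client session."""

    def __init__(self, local_replica):
        self.replica = local_replica
        self.deps = set()  # {(key, version), ...}
        self.clock = 0

    def get(self, key):
        version, value = self.replica.store[key]
        self.deps.add((key, version))
        return value

    def put(self, key, value):
        self.clock += 1
        version = self.clock
        self.replica.store[key] = (version, value)
        write = (key, version, value, frozenset(self.deps))
        self.deps = {(key, version)}  # later writes depend on this one
        return write  # shipped asynchronously to remote replicas


local, remote = Replica(), Replica()
ctx = ClientContext(local)
w1 = ctx.put("photo", "beach.jpg")
w2 = ctx.put("album", "summer")          # causally after the photo write
# Delivered out of order: the album write waits for its dependency.
assert remote.apply_replicated_write(*w2) is False
assert remote.apply_replicated_write(*w1) is True
assert remote.apply_replicated_write(*w2) is True
```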
Citations: 662
Session details: Storage
E. Kohler
{"title":"Session details: Storage","authors":"E. Kohler","doi":"10.1145/3247973","DOIUrl":"https://doi.org/10.1145/3247973","url":null,"abstract":"","PeriodicalId":20672,"journal":{"name":"Proceedings of the Twenty-Third ACM Symposium on Operating Systems Principles","volume":"63 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2011-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75825459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Intrusion recovery for database-backed web applications
Pub Date: 2011-10-23 | DOI: 10.1145/2043556.2043567
Ramesh Chandra, Taesoo Kim, Meelap Shah, Neha Narula, N. Zeldovich
Warp is a system that helps users and administrators of web applications recover from intrusions such as SQL injection, cross-site scripting, and clickjacking attacks, while preserving legitimate user changes. Warp repairs from an intrusion by rolling back parts of the database to a version before the attack, and replaying subsequent legitimate actions. Warp allows administrators to retroactively patch security vulnerabilities---i.e., apply new security patches to past executions---to recover from intrusions without requiring the administrator to track down or even detect attacks. Warp's time-travel database allows fine-grained rollback of database rows, and enables repair to proceed concurrently with normal operation of a web application. Finally, Warp captures and replays user input at the level of a browser's DOM, to recover from attacks that involve a user's browser. For a web server running MediaWiki, Warp requires no application source code changes to recover from a range of common web application vulnerabilities with minimal user input at a cost of 24--27% in throughput and 2--3.2 GB/day in storage.
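A minimal sketch of the rollback-and-replay idea: rows keep version histories, repair discards versions written at or after the attack time, and logged legitimate actions from after that point are re-applied. A real repair would re-execute application-level actions rather than re-apply stored values, and all names here are illustrative, not Warp's implementation.

```python
# Sketch of time-travel rows plus rollback-and-replay repair.
# Structures and names are illustrative only.


class TimeTravelTable:
    def __init__(self):
        self.history = {}    # row_id -> [(timestamp, value), ...] in time order
        self.action_log = [] # (timestamp, row_id, value, legitimate)

    def write(self, ts, row_id, value, legitimate=True):
        self.history.setdefault(row_id, []).append((ts, value))
        self.action_log.append((ts, row_id, value, legitimate))

    def current(self, row_id):
        return self.history[row_id][-1][1]

    def repair(self, attack_ts):
        """Roll rows back to just before attack_ts, then replay legitimate writes."""
        for row_id, versions in self.history.items():
            self.history[row_id] = [v for v in versions if v[0] < attack_ts]
        replay = [a for a in self.action_log if a[0] >= attack_ts and a[3]]
        self.action_log = [a for a in self.action_log if a[0] < attack_ts]
        for ts, row_id, value, _ in replay:
            self.write(ts, row_id, value, legitimate=True)


table = TimeTravelTable()
table.write(1, "page:home", "welcome")
table.write(2, "page:home", "defaced", legitimate=False)   # the intrusion
table.write(3, "page:home", "welcome v2")                   # legitimate edit
table.repair(attack_ts=2)
assert table.current("page:home") == "welcome v2"
```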
Citations: 57