
Proceedings of the ... ACM Symposium on Cloud Computing [electronic resource] : SOCC ... ... SoCC (Conference): Latest Publications

Latency-Aware Dynamic Server and Cooling Capacity Provisioner for Data Centers
Anuroop Desu, Udaya L. N. Puvvadi, Tyler Stachecki, Sagar Vishwakarma, Sadegh Khalili, K. Ghose, B. Sammakia
Data center operators generally overprovision IT and cooling capacities to address unexpected utilization increases that can violate service quality commitments. This results in energy wastage. To reduce this wastage, we introduce HCP (Holistic Capacity Provisioner), a service-latency-aware management system for dynamically provisioning the server and cooling capacity. Short-term load prediction is used to adjust the online server capacity to concentrate the workload onto the smallest possible set of online servers. Idling servers are completely turned off based on a separate long-term utilization predictor. HCP targets data centers that use chilled air cooling and varies the cooling provided commensurately, using adjustable aperture tiles and speed control of the blower fans in the air handler. An HCP prototype supporting server heterogeneity is evaluated with real-world workload traces/requests and realizes up to 32% total energy savings while limiting the 99th-percentile and average latency increases to at most 6.67% and 3.24%, respectively, against a baseline system where all servers are kept online.
DOI: 10.1145/3472883.3487015 · Published: 2021-11-01
Cited: 1
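The short-term provisioning step in the HCP abstract, concentrating the workload onto the smallest possible set of online servers, can be sketched as follows. This is an illustrative reconstruction, not code from the paper; the function names, the 10% headroom, and the flat per-server capacity model are all assumptions.

```python
import math

def servers_needed(predicted_rps: float, per_server_rps: float, headroom: float = 0.1) -> int:
    """Smallest online-server count that covers the predicted load plus headroom."""
    if predicted_rps <= 0:
        return 1  # keep at least one server online for sudden arrivals
    return max(1, math.ceil(predicted_rps * (1 + headroom) / per_server_rps))

def consolidate(predicted_rps: float, fleet: list, per_server_rps: float):
    """Split the fleet into the smallest online set and shutdown candidates."""
    n = min(len(fleet), servers_needed(predicted_rps, per_server_rps))
    return fleet[:n], fleet[n:]
```

In the paper, the shutdown candidates would only be turned off when the separate long-term predictor agrees; this sketch shows only the sizing arithmetic.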
On Merits and Viability of Multi-Cloud Serverless
A. F. Baarzi, G. Kesidis, Carlee Joe-Wong, Mohammad Shahrad
Serverless computing is a rapidly growing paradigm in the cloud industry that envisions functions as the computational building blocks of an application. Instead of forcing the application developer to provision cloud resources for their application, the cloud provider provisions the required resources for each function "under the hood." In this work, we envision virtual serverless providers (VSPs) to aggregate serverless offerings. In doing so, VSPs allow developers (and businesses) to get rid of vendor lock-in problems and exploit pricing and performance variation across providers by adaptively utilizing the best provider at each time, forcing the providers to compete to offer cheaper and superior services. We discuss the merits of a VSP and show that serverless systems are well-suited to cross-provider aggregation, compared to virtual machines. We propose a VSP system architecture and implement an initial version. Using experimental evaluations, our preliminary results show that a VSP can improve maximum sustained throughput by 1.2x to 4.2x, reduces SLO violations by 98.8%, and reduces total invocation costs by 54%.
DOI: 10.1145/3472883.3487002 · Published: 2021-11-01
Cited: 15
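The VSP routing idea, adaptively picking the best provider per invocation based on observed price and performance, might look like the following. This is a minimal sketch under assumed field names (`p99_ms`, `cost_per_1m_invokes`); the paper's actual selection logic is not specified in the abstract.

```python
def pick_provider(providers, slo_p99_ms):
    """Route the next invocation to the cheapest provider whose observed p99
    latency meets the SLO; fall back to the fastest provider if none qualifies."""
    feasible = [p for p in providers if p["p99_ms"] <= slo_p99_ms]
    if not feasible:
        return min(providers, key=lambda p: p["p99_ms"])["name"]
    return min(feasible, key=lambda p: p["cost_per_1m_invokes"])["name"]
```

Keeping the decision per-invocation is what forces providers to compete continuously, as the abstract argues.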
Parallax
Giorgos Xanthakis, Giorgos Saloustros, Nikos Batsaras, Anastasios Papagiannis, A. Bilas
Key-value (KV) separation is a technique that introduces randomness in the I/O access patterns to reduce I/O amplification in LSM-based key-value stores. KV separation has a significant drawback that makes it less attractive: Delete and update operations in modern workloads result in frequent and expensive garbage collection (GC) in the value log. In this paper, we design and implement Parallax, which proposes hybrid KV placement to reduce GC overhead significantly and increase the benefits of using a log. We first model the benefits of KV separation for different KV pair sizes. We use this model to classify KV pairs into three categories: small, medium, and large. Then, Parallax uses different approaches for each KV category: It always places large values in a log and small values in place. For medium values it uses a mixed strategy that combines the benefits of using a log and eliminates GC overhead as follows: It places medium values in a log for all but the last few (typically one or two) levels in the LSM structure, where it performs a full compaction, merges values in place, and reclaims log space without the need for GC. We evaluate Parallax against RocksDB, which places all values in place, and BlobDB, which always performs KV separation. We find that Parallax increases throughput by up to 12.4x and 17.83x, decreases I/O amplification by up to 27.1x and 26x, and increases CPU efficiency by up to 18.7x and 28x, respectively, for all but scan-based YCSB workloads.
DOI: 10.1145/3472883.3487012 · Published: 2021-11-01
Cited: 10
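Parallax's hybrid placement rule, small values in place, large values in the log, and medium values in the log except at the last LSM levels, reduces to a small decision function. The size thresholds and LSM depth below are illustrative assumptions, not the paper's tuned values.

```python
SMALL_MAX_B, MEDIUM_MAX_B = 128, 1024  # illustrative byte thresholds, not the paper's

def place_value(value_size: int, level: int, depth: int = 8, merge_levels: int = 2) -> str:
    """Hybrid KV placement: small in place, large in the log, and medium in the
    log except at the last few levels, where values are merged in place."""
    if value_size <= SMALL_MAX_B:
        return "in-place"
    if value_size > MEDIUM_MAX_B:
        return "log"
    return "in-place" if level >= depth - merge_levels else "log"
```

Merging medium values in place at the deepest levels is what lets Parallax reclaim log space during the full compaction instead of running a separate GC pass.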
Atoll
With user-facing apps adopting serverless computing, good latency performance of serverless platforms has become a strong fundamental requirement. However, it is difficult to achieve this on platforms today due to the design of their underlying control and data planes, which are particularly ill-suited to short-lived functions with unpredictable arrival patterns. We present Atoll, a serverless platform that overcomes these challenges via a ground-up redesign of the control and data planes. In Atoll, each app is associated with a latency deadline. Atoll achieves its per-app request latency goals by: (a) partitioning the cluster into (semi-global scheduler, worker pool) pairs, (b) performing deadline-aware scheduling and proactive sandbox allocation, and (c) using a load balancing layer to do sandbox-aware routing, and automatically scaling the semi-global schedulers per app. Our results show that Atoll reduces missed deadlines by ~66x and tail latencies by ~3x compared to state-of-the-art alternatives.
DOI: 10.1145/3472883.3486981 · Published: 2021-11-01
Cited: 49
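One plausible reading of Atoll's deadline-aware scheduling is earliest-deadline-first dispatch of pending invocations. This is a hedged sketch, not the paper's scheduler; the request shape (`id`, `deadline_ms`) is assumed.

```python
import heapq

def dispatch_order(requests):
    """Pop pending invocations in earliest-deadline-first order."""
    heap = [(r["deadline_ms"], r["id"]) for r in requests]
    heapq.heapify(heap)
    order = []
    while heap:
        _, rid = heapq.heappop(heap)
        order.append(rid)
    return order
```

In the full system this ordering would run inside each semi-global scheduler, with proactive sandbox allocation hiding cold-start latency for the requests at the front of the queue.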
George
Suyi Li, Luping Wang, Wei Wang, Yinghao Yu, Bo Li
DOI: 10.1163/2589-7993_eeco_sim_036395 · Published: 2021-11-01
Cited: 1
Mind the Gap: Broken Promises of CPU Reservations in Containerized Multi-tenant Clouds
Li Liu, Haoliang Wang, An Wang, Mengbai Xiao, Yue Cheng, Songqing Chen
Containerization is becoming increasingly popular, but unfortunately, containers often fail to deliver the anticipated performance with the allocated resources. In this paper, we first demonstrate that performance variance and degradation are significant (up to 5x) in a multi-tenant environment where containers are co-located. We then investigate the root cause of such performance degradation. Contrary to the common belief that such degradation is caused by resource contention and interference, we find that there is a gap between the amount of CPU a container reserves and actually gets. The root cause lies in the design choices of today's Linux scheduling mechanism, which we call Forced Runqueue Sharing and Phantom CPU Time. In fact, there are fundamental conflicts between the need to reserve CPU resources and the Completely Fair Scheduler's work-conserving nature, and this contradiction prevents a container from fully utilizing its requested CPU resources. As a proof-of-concept, we implement a new resource configuration mechanism atop the widely used Kubernetes and Linux to demonstrate its potential benefits and shed light on future scheduler redesign. Our proof-of-concept, compared to the existing scheduler, improves the performance of batch and interactive containerized apps by up to 5.6x and 13.7x, respectively.
DOI: 10.1145/3472883.3486997 · Published: 2021-11-01
Cited: 2
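The gap the paper measures, between the CPU a container reserves and the CPU it actually receives, can be expressed with simple arithmetic over CFS bandwidth parameters. This sketch only mirrors the standard cgroup `cpu.max` quota/period semantics; it does not reproduce the paper's measurement methodology.

```python
def reserved_cpus(quota_us: int, period_us: int) -> float:
    """CPU reservation implied by CFS bandwidth control (cgroup cpu.max semantics):
    quota microseconds of runtime granted per period."""
    return quota_us / period_us

def reservation_gap(quota_us: int, period_us: int, cpu_used_s: float, wall_s: float) -> float:
    """Reserved CPUs minus actually consumed CPUs over a wall-clock window.
    A positive result means the container got less CPU than it reserved."""
    return reserved_cpus(quota_us, period_us) - cpu_used_s / wall_s
```

For example, a container with a 200ms quota per 100ms period reserves 2 CPUs; if it only accumulates 15 CPU-seconds over a 10-second busy window, it effectively received 1.5 CPUs, a gap of half a core.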
Networking and Cloud: A Match Made in Heaven
Aditya Akella
Over the past few years, networking advances have spurred fundamental transformations in cloud computing. Technologies such as software defined networking, network virtualization, and high-bisection fabrics have simplified cloud design and operation, brought exciting new workloads to the cloud, and helped lower the bar to cloud adoption. Networking is poised to bring even more interesting and fundamental transformations to the cloud over the next few years. In this talk, I will describe several promising networking ideas, spanning high-performance fabrics and network stacks, programmable hardware, abstractions for network automation, and novel inter-domain protocols and services. I will discuss the tantalizing opportunities these ideas offer for cloud computing, and the fundamental new research and practical challenges they introduce. I will conclude my talk with observations on what it would take for our research community to make rapid and meaningful progress in this space.
DOI: 10.1145/3472883.3517037 · Published: 2021-11-01
Cited: 0
Secure Namespaced Kernel Audit for Containers
S. Lim, Bogdan Stelea, Xueyuan Han, Thomas Pasquier
Despite the wide usage of container-based cloud computing, container auditing for security analysis relies mostly on built-in host audit systems, which often lack the ability to capture high-fidelity container logs. State-of-the-art reference-monitor-based audit techniques greatly improve the quality of audit logs, but their system-wide architecture is too costly to be adapted for individual containers. Moreover, these techniques typically require extensive kernel modifications, making it difficult to deploy in practical settings. In this paper, we present saBPF (secure audit BPF), an extension of the eBPF framework capable of deploying secure system-level audit mechanisms at the container granularity. We demonstrate the practicality of saBPF in Kubernetes by designing an audit framework, an intrusion detection system, and a lightweight access control mechanism. We evaluate saBPF and show that it is comparable in performance and security guarantees to audit systems from the literature that are implemented directly in the kernel.
DOI: 10.1145/3472883.3486976 · Published: 2021-11-01
Cited: 6
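saBPF's core idea is auditing at container granularity rather than system-wide. The scoping step, attributing events to one container and checking them against a per-container policy, can be illustrated in plain Python; the real system does this in eBPF inside the kernel, and the cgroup-id keying and allowlist below are assumptions, not the paper's mechanism.

```python
ALLOWED = {"read", "write", "openat", "close"}  # illustrative per-container policy

def audit_container(events, cgroup_id):
    """Scope a host-wide audit stream to one container via its cgroup id and
    flag syscalls outside that container's allowlist."""
    scoped = [e for e in events if e["cgroup_id"] == cgroup_id]
    violations = [e for e in scoped if e["syscall"] not in ALLOWED]
    return scoped, violations
```

Doing this filtering in the kernel, as saBPF does, avoids shipping every host event to userspace just to discard most of them.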
Open Research Problems in the Cloud
R. Ramakrishnan, J. Rexford, Hakim Weatherspoon, S. Peter, F. Ozcan, Mehul Shah
Open Research Problems in the Cloud Panel.
DOI: 10.1145/3472883.3517124 · Published: 2021-11-01
Cited: 0
Cloud-Scale Runtime Verification of Serverless Applications
Kalev Alpernas, Aurojit Panda, L. Ryzhyk, Shmuel Sagiv
Serverless platforms aim to simplify the deployment, scaling, and management of cloud applications. Serverless applications are inherently distributed, and are executed using short-lived ephemeral processes. The use of short-lived ephemeral processes simplifies application scaling and management, but also means that existing approaches to monitoring distributed systems and detecting bugs cannot be applied to serverless applications. In this paper we propose Watchtower, a framework that enables runtime monitoring of serverless applications. Watchtower takes program properties as inputs, and can detect cases where applications violate these properties. We design Watchtower to minimize application changes, and to scale at the same rate as the application. We achieve the former by instrumenting libraries rather than application code, and the latter by structuring Watchtower as a serverless application. Once a bug is found, developers can use the Watchtower debugger to identify and address the root cause of the bug.
DOI: 10.1145/3472883.3486977 · Published: 2021-11-01
Cited: 10
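Watchtower checks developer-supplied program properties against running applications. A property check over a per-request event trace might look like the following sketch; the property ("no write before authentication") and the event names are invented for illustration and do not come from the paper.

```python
def check_auth_before_write(trace):
    """Example runtime property: every 'write' event must be preceded by an
    'auth' event. Returns the index of the first violating event, or -1."""
    authed = False
    for i, ev in enumerate(trace):
        if ev == "auth":
            authed = True
        elif ev == "write" and not authed:
            return i
    return -1
```

Because each check consumes a single request's trace, checkers like this can themselves be deployed as serverless functions, which is how Watchtower scales at the same rate as the monitored application.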