
Latest Publications in IEEE Cloud Computing

Compliance-as-Code for Cybersecurity Automation in Hybrid Cloud
Q1 Computer Science Pub Date : 2022-07-01 DOI: 10.1109/CLOUD55607.2022.00066
Vikas Agarwal, Chris Butler, Lou Degenaro, Arun Kumar, A. Sailer, Gosia Steinder
Automation of cybersecurity processes has become crucial with large scale deployment of sensitive workloads in regulated on-prem, private, and public cloud environments. Regulatory and standards bodies such as Payment Card Industry (PCI), Federal Financial Institutions Examination Council (FFIEC), International Organization for Standardization (ISO), and others govern the minimal set of cybersecurity controls that an organization must implement. To meet such requirements while maintaining business agility, organizations need to modernize from manual document based compliance management to automated processes for continuous compliance. This modernized process is called compliance-as-code. In this paper, we present an architecture for compliance-as-code based on a standardized framework. We identify several design choices and our rationale behind those. Specifically, we introduce a system for manipulating compliance information in a standardized manner and a data interchange protocol for inter-operable communication of compliance information. We demonstrate the scalability of our approach and briefly describe deployment and experimental results in real world settings.
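The abstract does not specify the standardized framework's schema, so the following is only a minimal, hypothetical Python sketch of the compliance-as-code idea: catalog controls are mapped to executable checks that emit machine-readable assessment results instead of manually maintained documents. The control IDs, the `CHECKS` registry, and the result fields are illustrative assumptions, not the authors' system or data interchange protocol.

```python
"""Hypothetical sketch: encode compliance controls as executable checks.

The control IDs, check logic, and result schema below are illustrative
assumptions; they do not reproduce the paper's standardized framework.
"""
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class CheckResult:
    control_id: str          # e.g. an identifier for a PCI or ISO control
    passed: bool
    evidence: str            # machine-readable evidence for auditors
    collected_at: str


def check_password_min_length(config: dict) -> CheckResult:
    """Example technical check backing an access-control requirement."""
    ok = config.get("password_min_length", 0) >= 12
    return CheckResult(
        control_id="ac-password-policy",
        passed=ok,
        evidence=f"password_min_length={config.get('password_min_length')}",
        collected_at=datetime.now(timezone.utc).isoformat(),
    )


def check_encryption_at_rest(config: dict) -> CheckResult:
    """Example technical check backing a data-protection requirement."""
    ok = config.get("storage_encryption", "none") in {"aes-256", "kms"}
    return CheckResult(
        control_id="dp-encryption-at-rest",
        passed=ok,
        evidence=f"storage_encryption={config.get('storage_encryption')}",
        collected_at=datetime.now(timezone.utc).isoformat(),
    )


# Registry mapping catalog controls to automated checks.
CHECKS = [check_password_min_length, check_encryption_at_rest]


def assess(config: dict) -> str:
    """Run all checks and emit results as a JSON document for exchange."""
    results = [asdict(check(config)) for check in CHECKS]
    return json.dumps({"assessment-results": results}, indent=2)


if __name__ == "__main__":
    target = {"password_min_length": 14, "storage_encryption": "aes-256"}
    print(assess(target))
```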
Citations: 0
Trimmer: Cost-Efficient Deep Learning Auto-tuning for Cloud Datacenters
Q1 Computer Science Pub Date : 2022-07-01 DOI: 10.1109/CLOUD55607.2022.00061
Damian Borowiec, G. Yeung, A. Friday, Richard Harper, P. Garraghan
Cloud datacenters provision high-performance Machine Learning-as-a-Service (MLaaS) at reduced resource cost via auto-tuning: automated tensor program optimization of Deep Learning models to minimize inference latency within a hardware device. However, given the extensive heterogeneity of Deep Learning models, libraries, and hardware devices, performing auto-tuning within Cloud datacenters incurs significant time, compute resource, and energy costs, which state-of-the-art auto-tuning is not designed to mitigate. In this paper we propose Trimmer, a high-performance and cost-efficient Deep Learning auto-tuning framework for Cloud datacenters. Trimmer maximizes DL model performance and tensor program cost-efficiency by preempting tensor program implementations exhibiting poor optimization improvement, and by applying an ML-based filtering method to replace expensive, low-performing tensor programs, increasing the likelihood of selecting low-latency tensor programs. Through an empirical study exploring the cost of DL model optimization techniques, our analysis indicates that 26–43% of total energy is expended on measuring tensor program implementations that do not positively contribute towards auto-tuning. Experiment results show that Trimmer achieves high auto-tuning cost-efficiency across different DL models, and reduces auto-tuning energy use by 21.8–40.9% for Cloud clusters whilst achieving DL model latency equivalent to state-of-the-art techniques.
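As a rough illustration of the filtering idea described above (not Trimmer's actual implementation), the sketch below preempts candidate tensor programs whose predicted latency offers little improvement, so that only promising candidates are measured on hardware. The cost model, threshold, and candidate representation are assumptions.

```python
"""Hypothetical sketch of cost-efficient auto-tuning with candidate filtering.

The latency predictor, improvement threshold, and measurement stub are
illustrative assumptions; they are not Trimmer's implementation.
"""
import random


def predict_latency_ms(candidate: dict) -> float:
    """Stand-in for an ML cost model scoring a tensor-program candidate."""
    # A real system would featurize the schedule; here we fake a noisy score.
    return candidate["base_ms"] * random.uniform(0.9, 1.1)


def measure_on_hardware(candidate: dict) -> float:
    """Stand-in for an expensive on-device measurement."""
    return candidate["base_ms"] * random.uniform(0.95, 1.05)


def tune(candidates, budget: int, improvement_threshold: float = 0.05):
    """Measure only candidates predicted to beat the best latency so far."""
    best_latency = float("inf")
    best = None
    measured = 0
    # Rank candidates by predicted latency so promising ones are tried first.
    for cand in sorted(candidates, key=predict_latency_ms):
        if measured >= budget:
            break
        predicted = predict_latency_ms(cand)
        # Preempt candidates not predicted to improve by at least the threshold.
        if best is not None and predicted > best_latency * (1 - improvement_threshold):
            continue
        latency = measure_on_hardware(cand)
        measured += 1
        if latency < best_latency:
            best_latency, best = latency, cand
    return best, best_latency, measured


if __name__ == "__main__":
    pool = [{"name": f"schedule-{i}", "base_ms": random.uniform(1.0, 5.0)}
            for i in range(200)]
    best, latency, used = tune(pool, budget=20)
    print(f"best={best['name']} latency={latency:.2f}ms measurements={used}")
```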
Citations: 0
CLOUD 2022 Reviewers
Q1 Computer Science Pub Date : 2022-07-01 DOI: 10.1109/cloud55607.2022.00013
{"title":"CLOUD 2022 Reviewers","authors":"","doi":"10.1109/cloud55607.2022.00013","DOIUrl":"https://doi.org/10.1109/cloud55607.2022.00013","url":null,"abstract":"","PeriodicalId":54281,"journal":{"name":"IEEE Cloud Computing","volume":"218 2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86246772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Konveyor Move2Kube: A Framework For Automated Application Replatforming
Q1 Computer Science Pub Date : 2022-07-01 DOI: 10.1109/CLOUD55607.2022.00031
P. V. Seshadri, Harikrishnan Balagopal, Akash Nayak, Ashok Pon Kumar, Pablo Loyola
We present Move2Kube, a replatforming framework that automates the creation and transformation of DevOps artifacts of an application for deployment in a Cloud Native environment. Our contributions include a customizable transformer framework that allows for complete control over the artifacts being processed, and output generated. We provide case studies and open-source benchmark-based evidence comparing Move2Kube with similar state-of-the-art tools to demonstrate its effectiveness in terms of effort reduction, diverse utility, and highlight future lines of work. Move2Kube is being developed as an open-source community project and it is available at: https://move2kube.konveyor.io/
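To make the replatforming output concrete, here is a small, hypothetical Python sketch that turns detected source metadata into a Kubernetes Deployment manifest, the kind of DevOps artifact such a pipeline generates. This is not Move2Kube's transformer API; the detection logic and field choices are assumptions.

```python
"""Hypothetical sketch: emit a Kubernetes Deployment from detected app metadata.

This illustrates the kind of artifact a replatforming pipeline produces;
it is not Move2Kube's transformer interface.
"""
import yaml  # requires pyyaml


def detect_app(source_dir: str) -> dict:
    """Stand-in for source analysis (name, image, port, scale)."""
    return {"name": "shopping-cart",
            "image": "registry.example.com/shopping-cart:1.0",
            "port": 8080, "replicas": 2}


def to_deployment(app: dict) -> dict:
    """Render a minimal Deployment manifest for the detected application."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": app["name"]},
        "spec": {
            "replicas": app["replicas"],
            "selector": {"matchLabels": {"app": app["name"]}},
            "template": {
                "metadata": {"labels": {"app": app["name"]}},
                "spec": {"containers": [{
                    "name": app["name"],
                    "image": app["image"],
                    "ports": [{"containerPort": app["port"]}],
                }]},
            },
        },
    }


if __name__ == "__main__":
    print(yaml.safe_dump(to_deployment(detect_app("./src")), sort_keys=False))
```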
Citations: 0
TIFF: Tokenized Incentive for Federated Learning
Q1 Computer Science Pub Date : 2022-07-01 DOI: 10.1109/CLOUD55607.2022.00064
Jingoo Han, Ahmad Faraz Khan, Syed Zawad, Ali Anwar, Nathalie Baracaldo Angel, Yi Zhou, Feng Yan, A. Butt
In federated learning (FL), clients collectively train a global machine learning model with their own local data. To address privacy and security concerns, each client in FL sends only updated weights rather than sensitive raw data. Most existing FL works focus mainly on improving model accuracy and training time, but only a few focus on FL incentive mechanisms. To build a high-performance model after FL training, clients need to provide large amounts of high-quality data. However, in real FL scenarios, high-quality clients are reluctant to participate in the FL process without reasonable compensation, because clients are self-interested and other clients can be business competitors. Participation itself also incurs some cost, since clients contribute their local datasets to the FL model. To address this problem, we propose TIFF, a novel tokenized incentive mechanism in which tokens are used to pay for the services of data-providing participants and the training infrastructure. Without payment delays, participation can be monetized for both providers and consumers, which promotes continued long-term participation of high-quality data parties. Additionally, paid tokens are reimbursed to each client acting as a consumer according to our newly proposed metrics (such as token reduction ratio and utility improvement ratio), which keeps clients engaged in the FL process as consumers. To measure data quality, accuracy is calculated during training without additional overheads. We leverage historical accuracy records and random exploration to select high-utility participants and to prevent overfitting. Results show that TIFF provides up to 6.9% more tokens to normal providers and up to 18.1% fewer tokens to malicious providers, improving final model accuracy by up to 7.4% compared to the default approach.
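The abstract names token reduction ratio and utility improvement ratio as metrics but gives no formulas, so the sketch below is only a hypothetical illustration of a tokenized incentive ledger: providers earn tokens in proportion to the measured accuracy improvement their updates contribute, and consumers spend tokens to use the resulting model. The payout rule and all rates are assumptions, not TIFF's mechanism.

```python
"""Hypothetical sketch of a tokenized incentive ledger for federated learning.

The payout rule (tokens proportional to measured accuracy improvement) and
all rates are illustrative assumptions, not TIFF's mechanism.
"""


class TokenLedger:
    def __init__(self):
        self.balances = {}

    def credit(self, party: str, amount: float):
        self.balances[party] = self.balances.get(party, 0.0) + amount

    def debit(self, party: str, amount: float):
        if self.balances.get(party, 0.0) < amount:
            raise ValueError(f"{party} has insufficient tokens")
        self.balances[party] -= amount


def reward_round(ledger, contributions, tokens_per_point: float = 10.0):
    """Pay providers for the accuracy gain attributed to their updates."""
    for provider, accuracy_gain in contributions.items():
        # Clamp at zero so unhelpful (or malicious) updates earn nothing.
        ledger.credit(provider, max(accuracy_gain, 0.0) * tokens_per_point)


def consume_model(ledger, consumer: str, fee: float = 5.0):
    """Consumers pay tokens to query the trained global model."""
    ledger.debit(consumer, fee)


if __name__ == "__main__":
    ledger = TokenLedger()
    ledger.credit("hospital-A", 20.0)  # initial stake for a consumer role
    # Round 1: measured accuracy gains (in percentage points) per provider.
    reward_round(ledger, {"hospital-A": 1.2, "clinic-B": 0.4, "bot-C": -0.3})
    consume_model(ledger, "hospital-A")
    print(ledger.balances)
```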
Citations: 5
Event-Driven Approach for Monitoring and Orchestration of Cloud and Edge-Enabled IoT Systems
Q1 Computer Science Pub Date : 2022-07-01 DOI: 10.1109/CLOUD55607.2022.00049
Mohamed Mouine, M. Saied
The Internet of Things (IoT) has greatly benefited the technological advances of a variety of fields, such as manufacturing and medicine, to name a few. The context surrounding these use cases is, however, often widely different from conventional Cloud Computing and web applications. Cyberphysical environments present us with major concerns and constraints surrounding the resilience of systems, which often rely on critical infrastructure and important workloads to prevent major losses for businesses or even the endangerment of individuals. The supervision of these infrastructures, outside the controlled and relatively safe environment of a datacenter, is therefore one of the major considerations for modern IoT systems. In this paper, we evaluate the core concepts around this thesis and propose an architectural and conceptual approach to improve the monitoring, scalability, and orchestration of IoT systems. We leverage and integrate different solutions inspired by modern IoT practices and the cloud ecosystem to optimize both software and hardware aspects. The solution revolves around an Edge Computing approach, Event-driven communication (MQTT) in the Edge, the orchestration of containerized services using Kubernetes and KubeEdge, and Device Twins for the management of physical components. Through development, experiment, and evaluation, we propose an architecture and two complementary fault-tolerance strategies to address synchronization between cloud and edge components and improve the overall resilience of the system.
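As a minimal illustration of the event-driven monitoring idea, the sketch below routes device telemetry events to a monitor that keeps a simple device-twin record of last reported state and flags devices whose heartbeats go stale. The in-process publish/subscribe bus stands in for an MQTT broker, and the topic names, payload fields, and staleness threshold are assumptions rather than the paper's design.

```python
"""Hypothetical sketch of event-driven device monitoring with a device-twin map.

The in-process publish/subscribe bus stands in for an MQTT broker; topics,
payloads, and thresholds are illustrative assumptions.
"""
import time
from collections import defaultdict


class EventBus:
    """Tiny in-process pub/sub bus standing in for MQTT."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, payload: dict):
        for handler in self.subscribers[topic]:
            handler(topic, payload)


class Monitor:
    """Keeps a device twin (last reported state) and flags stale devices."""
    def __init__(self, stale_after_s: float = 30.0):
        self.twins = {}
        self.stale_after_s = stale_after_s

    def on_telemetry(self, topic: str, payload: dict):
        self.twins[payload["device_id"]] = {
            "last_seen": payload["ts"],
            "state": payload["state"],
        }

    def stale_devices(self, now: float):
        return [d for d, t in self.twins.items()
                if now - t["last_seen"] > self.stale_after_s]


if __name__ == "__main__":
    bus = EventBus()
    monitor = Monitor(stale_after_s=30.0)
    bus.subscribe("devices/telemetry", monitor.on_telemetry)

    now = time.time()
    bus.publish("devices/telemetry", {"device_id": "pump-1", "ts": now - 60, "state": "on"})
    bus.publish("devices/telemetry", {"device_id": "valve-2", "ts": now - 5, "state": "closed"})
    print("stale:", monitor.stale_devices(now))  # -> ['pump-1']
```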
Citations: 1
Bypass Container Overlay Networks with Transparent BPF-driven Socket Replacement
Q1 Computer Science Pub Date : 2022-07-01 DOI: 10.1109/CLOUD55607.2022.00033
Sunyanan Choochotkaew, Tatsuhiro Chiba, Scott Trent, Marcelo Amaral
Containerization on the cloud offers several crucial benefits. However, these benefits are negated by the effects of virtual network stack and address encapsulation, especially for workloads that require intense communication. Socket replacement is a promising approach to breach this wall without changing the underlay infrastructure by replacing a nested network stack with a simple host network stack. Current state-of-the-art approaches perform this replacement by preloading the overridden socket library in a containerized process. However, the preloading approach requires user effort to modify the deploying manifests and a compromised security policy configuration of privileged containers to access the host namespace. This paper introduces a new replacement framework where a secured control plane agent performs the replacement by utilizing low-overhead BPF kernel tracing technology. As a result, containers can obtain host-native network performance and neither modification nor escalated privileges are required for user containers. Experiments on multiple benchmarks including iPerf, MPI, memslap, and GROMACS have been conducted to confirm efficacy.
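The kernel-side mechanism is eBPF-based and cannot be usefully reproduced in a few lines, so the following is only a user-space Python analogue of the core idea: when a workload connects to an overlay address, the connection is transparently redirected to the corresponding host-network address. The address mapping and helper name are hypothetical; the real system performs the replacement in the kernel so the application needs no changes.

```python
"""User-space analogue of transparent socket redirection (illustration only).

A real system does this in the kernel with BPF so the application is
unmodified; here the mapping table and helper are hypothetical.
"""
import socket

# Hypothetical mapping from overlay (pod) endpoints to host-network endpoints.
OVERLAY_TO_HOST = {
    ("10.244.1.17", 8080): ("192.168.1.12", 30080),
}


def connect_bypassing_overlay(dest: tuple) -> socket.socket:
    """Connect to the host endpoint when a known overlay endpoint is requested."""
    target = OVERLAY_TO_HOST.get(dest, dest)  # fall back to the original address
    return socket.create_connection(target, timeout=2.0)


if __name__ == "__main__":
    try:
        s = connect_bypassing_overlay(("10.244.1.17", 8080))
        print("connected via", s.getpeername())
        s.close()
    except OSError as err:
        # Expected outside a cluster: the illustrative addresses are not reachable.
        print("no endpoint available in this demo environment:", err)
```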
Citations: 1
Network Aware Container Orchestration for Telco Workloads
Q1 Computer Science Pub Date : 2022-07-01 DOI: 10.1109/CLOUD55607.2022.00063
Kavya Govindarajan, Chander Govindarajan, Mudit Verma
In recent years, with the maturation of container orchestration platforms like Kubernetes, containers are now becoming the default way to deploy cloud-native applications, designed as microservices, on public and private clouds. These trends have also spread to the field of Telecommunications, boosted by the onset of 5G. Network functions processing millions of packets per second, earlier run as proprietary physical boxes, are now being realized as disaggregated container based microservices (CNFs) running on commodity clusters managed by orchestrators, like Kubernetes, on Telco clouds. While container orchestrators have evolved to meet the needs of enterprise applications, Telco workloads still remain a second class citizen, as the orchestrator is presently unaware of the networking needs of CNFs and cannot guarantee QoS of network intensive functions. In this work, we examine orchestration of network sensitive functions and identify the key networking requirements of containerized Telco workloads from the orchestration platform. We design and propose NACO - Network Aware Container Orchestration, a minimal, cloud-native and scalable extension to the Kubernetes platform to address these requirements and provide first class lifecycle management of CNFs used in Telco workloads. We implement a prototype of the system and demonstrate that we can achieve network aware container orchestration with minimal operation times.
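As a generic illustration of what "network aware" placement means (not NACO's actual scoring logic), the sketch below scores candidate nodes for a packet-processing CNF by spare NIC bandwidth and latency to a peer function, then picks the best node. The node attributes, weights, and requirements are assumptions.

```python
"""Hypothetical sketch of network-aware placement scoring for a CNF.

Node attributes, weights, and requirements are illustrative assumptions;
this is not NACO's scheduler.
"""


def score_node(node: dict, required_gbps: float, weight_latency: float = 0.5) -> float:
    """Higher is better: spare bandwidth helps, latency to the peer CNF hurts."""
    spare_gbps = node["nic_gbps_free"] - required_gbps
    if spare_gbps < 0:
        return float("-inf")  # cannot satisfy the bandwidth requirement at all
    return spare_gbps - weight_latency * node["latency_to_peer_ms"]


def place(cnf_required_gbps: float, nodes: list) -> str:
    """Pick the highest-scoring node that meets the CNF's network needs."""
    best = max(nodes, key=lambda n: score_node(n, cnf_required_gbps))
    if score_node(best, cnf_required_gbps) == float("-inf"):
        raise RuntimeError("no node satisfies the CNF's bandwidth requirement")
    return best["name"]


if __name__ == "__main__":
    cluster = [
        {"name": "worker-1", "nic_gbps_free": 8.0, "latency_to_peer_ms": 0.9},
        {"name": "worker-2", "nic_gbps_free": 3.0, "latency_to_peer_ms": 0.1},
        {"name": "worker-3", "nic_gbps_free": 12.0, "latency_to_peer_ms": 4.0},
    ]
    print("place CNF on:", place(cnf_required_gbps=4.0, nodes=cluster))
```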
Citations: 2
Secure Cloud Storage with Joint Deduplication and Erasure Protection
Q1 Computer Science Pub Date : 2022-07-01 DOI: 10.1109/CLOUD55607.2022.00078
Rasmus Vestergaard, Elena Pagnin, Rohon Kundu, D. Lucani
This work proposes a novel design for secure cloud storage systems using a third party to meet three seemingly opposing demands: reduce storage requirements on the cloud, protect against erasures (data loss), and maintain confidentiality of the data. More specifically, we achieve storage cost reductions using data deduplication without requiring system users to trust that the cloud operates honestly. We analyze the security of our scheme against honest-but-curious and covert adversaries that may collude with multiple parties and show that no novel sensitive information can be inferred, assuming random oracles and a high min-entropy data source. We also provide a mathematical analysis to characterize its potential for compression given the popularity of individual chunks of data and its overall erasure protection capabilities. In fact, we show that the storage cost of our scheme for a chunk with r replicas is O(log(r)/r), while deduplication without security or reliability considerations is O(1/r), i.e., our added cost for providing reliability and security is only O(log(r)). We provide a proof of concept implementation to simulate performance and verify our analytical results.
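The storage-cost claim can be sanity-checked numerically: with plain deduplication a chunk held by r owners is stored once (amortized cost 1/r per owner), while the secure scheme stores on the order of log(r) protected copies (amortized cost log(r)/r). The sketch below compares the two under those stated assumptions, ignoring constant factors, and shows the added cost growing only logarithmically in r.

```python
"""Numeric illustration of amortized storage cost per owner of a chunk.

Assumes plain deduplication stores 1 copy and the secure scheme stores
on the order of log2(r) copies; constant factors are ignored.
"""
import math


def plain_dedup_cost(r: int) -> float:
    """Amortized copies stored per owner with ordinary deduplication."""
    return 1.0 / r


def secure_dedup_cost(r: int) -> float:
    """Amortized copies per owner when ~log2(r) protected copies are kept."""
    return max(1.0, math.log2(r)) / r


if __name__ == "__main__":
    print(f"{'owners r':>8} {'plain 1/r':>10} {'secure log(r)/r':>16} {'overhead x':>11}")
    for r in (1, 2, 8, 64, 1024):
        plain, secure = plain_dedup_cost(r), secure_dedup_cost(r)
        print(f"{r:>8} {plain:>10.4f} {secure:>16.4f} {secure / plain:>11.1f}")
```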
Citations: 0
Data Access Pattern Recommendations for Microservices Architecture
Q1 Computer Science Pub Date : 2022-07-01 DOI: 10.1109/CLOUD55607.2022.00044
D. Venkatesh, Shivali Agarwal
The choice of pattern of data access from database tables is critical for a microservice to maximize benefits of distributed architecture. Traditionally, microservices have been designed using shared table access pattern commonly referred to as CRUD pattern. More recently, there has been a growing interest in applying other patterns like CQRS. In this work, we propose a system that recommends the most suitable pattern for a microservice as per the separation in read and write operations in the transactions performed by the service.
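For readers unfamiliar with the two patterns named above, the following generic Python sketch (not the paper's recommender) contrasts them: a CRUD-style service reads and writes one shared table, whereas a CQRS-style service routes commands to a write model and queries to a separately maintained read model, which is exactly the read/write separation the recommendation is based on.

```python
"""Generic sketch of CRUD vs. CQRS data access (illustration only)."""


class CrudOrderService:
    """Reads and writes go to the same shared table."""
    def __init__(self):
        self.orders = {}                       # shared table: id -> order row

    def create(self, order_id, item):
        self.orders[order_id] = {"item": item, "status": "new"}

    def get(self, order_id):
        return self.orders.get(order_id)


class CqrsOrderService:
    """Commands mutate the write model; queries hit a denormalized read model."""
    def __init__(self):
        self.write_model = {}                  # source of truth for commands
        self.read_model = {}                   # view optimized for queries

    def handle_create(self, order_id, item):   # command side
        self.write_model[order_id] = {"item": item, "status": "new"}
        self._project(order_id)

    def _project(self, order_id):              # keeps the read model in sync
        row = self.write_model[order_id]
        self.read_model[order_id] = f"{row['item']} ({row['status']})"

    def query(self, order_id):                 # query side, no writes
        return self.read_model.get(order_id)


if __name__ == "__main__":
    crud = CrudOrderService()
    crud.create("o-1", "keyboard")
    print("CRUD:", crud.get("o-1"))

    cqrs = CqrsOrderService()
    cqrs.handle_create("o-1", "keyboard")
    print("CQRS:", cqrs.query("o-1"))
```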
Citations: 0