
Latest Publications in IEEE Cloud Computing

Secure Offloading of User-level IDS with VM-compatible OS Emulation Layers for Intel SGX
Q1 Computer Science Pub Date: 2022-07-01 DOI: 10.1109/CLOUD55607.2022.00035
Takumi Kawamura, Kenichi Kourai
Since virtual machines (VMs) provided by Infrastructure-as-a-Service clouds often suffer from attacks, they need to be monitored using intrusion detection systems (IDS). For secure execution of host-based IDS (HIDS), IDS offloading is used to run IDS outside target VMs, but offloaded IDS can still be attacked. To address this issue, secure IDS offloading using Intel SGX has been proposed. However, IDS development requires kernel-level programming, which is difficult for most IDS developers. This paper proposes SCwatcher, which enables user-level HIDS running on top of the operating system (OS) to be securely offloaded using VM-compatible OS emulation layers for SGX. SCwatcher provides the standard OS interface used in a target VM to in-enclave IDS. In particular, the virtual proc filesystem called vProcFS analyzes OS data using VM introspection and returns the system information inside the target VM. We have implemented SCwatcher using Xen supporting SGX virtualization and two types of OS emulation layers for SGX, called SCONE and Occlum. We confirmed that SCwatcher can offload legacy HIDS and showed that its performance can be comparable to that of insecure IDS offloading.
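The paper itself includes no code here; as a minimal sketch of the kind of user-level check that SCwatcher's proc-style interface would serve, the snippet below parses `/proc/<pid>/status`-style text and flags a watchlisted process name. The sample text, the field names used, and the watchlist are illustrative assumptions, not taken from the paper.

```python
# Hypothetical /proc/<pid>/status-style text as a user-level HIDS would
# read it through a standard proc interface (sample data, not real output).
SAMPLE_STATUS = """\
Name:\tcryptominer
State:\tR (running)
Pid:\t4242
"""

def parse_status(text):
    """Parse 'Key:<tab>value' lines as found in /proc/<pid>/status."""
    fields = {}
    for line in text.splitlines():
        key, _, value = line.partition(":")
        fields[key] = value.strip()
    return fields

def is_suspicious(fields, watchlist=("cryptominer", "kworker_fake")):
    """Flag a process whose name appears on a (hypothetical) watchlist."""
    return fields.get("Name") in watchlist

fields = parse_status(SAMPLE_STATUS)
print(fields["Pid"], is_suspicious(fields))  # → 4242 True
```

The point of the sketch is that the IDS logic needs only the ordinary proc text format, which is exactly the interface vProcFS reconstructs inside the enclave via VM introspection.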
Citations: 1
CLOUD 2022 Sub-Reviewers
Q1 Computer Science Pub Date: 2022-07-01 DOI: 10.1109/cloud55607.2022.00014
Citations: 0
Multi-Objective Robust Workflow Offloading in Edge-to-Cloud Continuum
Q1 Computer Science Pub Date: 2022-07-01 DOI: 10.1109/CLOUD55607.2022.00070
Hongyun Liu, Ruyue Xin, Peng Chen, Zhiming Zhao
Workflow offloading in the edge-to-cloud continuum copes with an extended computation network spanning edge devices and cloud platforms. With the growing significance of edge and cloud technologies, workflow offloading among these environments has been investigated in recent years. However, the dynamics of the offloading optimization objectives, i.e., latency, resource utilization rate, and energy consumption across the edge and cloud sides, have hardly been researched. Consequently, the Quality of Service (QoS) and offloading performance also experience uncertain deviation. In this work, we propose a multi-objective robust offloading algorithm to address this issue, dealing with both dynamics and multi-objective optimization. The workflow request model is modeled as a Directed Acyclic Graph (DAG). An LSTM-based sequence-to-sequence neural network learns the offloading policy. We then conduct comprehensive experiments to validate the robustness of our algorithm. As a result, our algorithm achieves better offloading performance on each objective and faster adaptation to newly changed environments than fine-tuned typical single-objective RL-based offloading methods.
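The DAG request model can be made concrete with a small sketch: Kahn's algorithm below orders the tasks of an illustrative fork-join workflow so that every task runs only after its dependencies. The task names and the graph are our own example, and the learned LSTM offloading policy itself is not reproduced here.

```python
from collections import deque

def topological_order(dag):
    """Kahn's algorithm over a task -> successors adjacency dict."""
    indegree = {t: 0 for t in dag}
    for succs in dag.values():
        for s in succs:
            indegree[s] += 1
    ready = deque(t for t, d in indegree.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for s in dag[t]:
            indegree[s] -= 1
            if indegree[s] == 0:
                ready.append(s)
    if len(order) != len(dag):
        raise ValueError("cycle detected: not a DAG")
    return order

# A small fork-join workflow: preprocess -> {infer, log} -> aggregate
workflow = {
    "preprocess": ["infer", "log"],
    "infer": ["aggregate"],
    "log": ["aggregate"],
    "aggregate": [],
}
print(topological_order(workflow))
```

Any valid schedule (on edge or cloud) must respect such an order; the offloading decision then assigns each task in the order to a placement.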
Citations: 2
CLOUD 2022 Organizing Committee
Q1 Computer Science Pub Date: 2022-07-01 DOI: 10.1109/cloud55607.2022.00012
Citations: 0
Avengers, Assemble! Survey of WebAssembly Security Solutions
Q1 Computer Science Pub Date: 2022-07-01 DOI: 10.1109/CLOUD55607.2022.00077
Minseo Kim, Hyerean Jang, Young-joo Shin
WebAssembly, abbreviated as Wasm, has emerged as a new paradigm in cloud-native development owing to its promising properties. Native execution speed and fast startup time make Wasm an alternative for container-based cloud applications. Despite its security-by-design strategy, however, WebAssembly suffers from a variety of vulnerabilities and weaknesses, which hinder its rapid adoption in cloud computing. For instance, its native execution performance has attracted cybercriminals, who abuse Wasm binaries for resource stealing such as cryptojacking. Without proper defense mechanisms, Wasm-based malware would proliferate, causing huge financial losses for cloud users. Moreover, the design principle that allows type-unsafe languages such as C/C++ inherently induces various memory bugs in a Wasm binary. Efficient and robust vulnerability analysis techniques are necessary to protect benign cloud-native Wasm applications from being exploited by attackers. Due to the young age of WebAssembly, however, few works in the literature provide developers with guidance on such security techniques. This makes developers hesitant to consider Wasm as their cloud-native platform. In this paper, we survey various techniques and methods for Wasm binary security proposed in the literature and systematically classify them according to certain criteria. As a result, we propose future research directions regarding the current lack of WebAssembly binary security research.
Citations: 3
Smart Edge Power Management to Improve Availability and Cost-efficiency of Edge Cloud
Q1 Computer Science Pub Date: 2022-07-01 DOI: 10.1109/CLOUD55607.2022.00032
Amardeep Mehta, Lackis Eleftheriadis
With increased 5G deployments across the world, new use-cases are emerging in many new domains, such as autonomous vehicles, smart cities, smart grids, and potentially the proliferation of augmented reality. Some of these applications require high availability, high bandwidth, and/or extremely low latency, depending on the applied service. Currently, the cost of deploying distributed edge nodes and its relation to the availability of power infrastructure is not well known. In this work, we demystify the cost of edge resources by proposing a cost estimation framework that considers the various existing edge-related constraints, such as the power grid and edge power node infrastructure. We consider Capital Expenditure (CAPEX) and Operational Expenditure (OPEX) as well as the time value of money in relation to hardware (HW) redundancy and depreciation for edge cloud resource estimation. The cost of resources is related to the local edge power infrastructure conditions for the applied services and the required Service Level Agreement (SLA). The availability of the application is estimated using a Reliability Block Diagram (RBD) of the edge components, including power and cooling systems. We propose a new method, called Smart Edge Power Management (SEPM), that identifies the relevant parameters and states of the edge power infrastructure, overcomes the various edge-power-related constraints, and further improves cost efficiency during operation. The performance evaluation is made on country-wide edge deployments for a mobile operator in Sweden. With our newly proposed method, SEPM, the cost efficiency of edge resources can be improved by up to 10%.
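The RBD-based availability estimate reduces to standard series/parallel arithmetic: components in series must all be up, while redundant (parallel) components fail only if every replica fails. A minimal sketch with illustrative availability values, not the paper's measurements:

```python
def series(*avail):
    """Series blocks: the chain is up only if every component is up."""
    a = 1.0
    for x in avail:
        a *= x
    return a

def parallel(*avail):
    """Redundant blocks: the group is down only if all replicas are down."""
    u = 1.0
    for x in avail:
        u *= (1.0 - x)
    return 1.0 - u

# Illustrative component availabilities for one edge site.
grid = 0.999
ups_pair = parallel(0.99, 0.99)   # two redundant UPS units
cooling = 0.995
server = 0.998
site = series(grid, ups_pair, cooling, server)
print(round(site, 5))
```

Note how the redundant UPS pair (0.9999) contributes almost nothing to downtime, so site availability is dominated by the non-redundant grid, cooling, and server blocks.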
Citations: 3
Performance and Revenue Analysis of Hybrid Cloud Federations with QoS Requirements
Q1 Computer Science Pub Date: 2022-07-01 DOI: 10.1109/CLOUD55607.2022.00055
Bo-wen Song, Marco Paolieri, L. Golubchik
Hybrid cloud architectures, where private clouds or data centers forward part of their workload to public cloud providers to satisfy quality-of-service (QoS) requirements, are increasingly common due to the availability of on-demand cloud resources that can be provisioned automatically through programming APIs. In this paper, we analyze performance and revenue in federations of hybrid clouds, where private clouds agree to share part of their local computing resources with other members of the federation. Through resource sharing, underprovisioned members can save on public cloud costs, while overprovisioned members can put their idle resources to work. To reward all hybrid clouds for their contributions (computing resources or workload), the public cloud savings due to the federation are distributed among members according to the Shapley value. We model this cloud architecture with a continuous-time Markov chain and prove that, if all hybrid clouds have the same QoS requirements, their profits are maximized when they join the federation and share all resources. We also show that this result does not hold when hybrid clouds have different QoS requirements, and we provide a solution to evaluate profit for different resource-sharing decisions. Finally, our experimental evaluation compares the distribution of public cloud savings according to the Shapley value with alternative approaches, illustrating its ability to discourage free riders in the federation.
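The Shapley-value split can be illustrated with a small worked example: each member's share is its average marginal contribution to the coalition's savings over all possible join orders. The coalition savings function `v` below uses made-up numbers, not the paper's data.

```python
from itertools import permutations

def shapley(players, v):
    """Average marginal contribution of each player over all join orders."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            phi[p] += v[with_p] - v[coalition]
            coalition = with_p
    n = len(orders)
    return {p: phi[p] / n for p in players}

players = ("A", "B", "C")
# Illustrative public-cloud savings for each coalition of hybrid clouds.
v = {
    frozenset(): 0, frozenset("A"): 0, frozenset("B"): 0, frozenset("C"): 0,
    frozenset("AB"): 20, frozenset("AC"): 30, frozenset("BC"): 10,
    frozenset("ABC"): 36,
}
phi = shapley(players, v)
print(phi)  # → {'A': 17.0, 'B': 7.0, 'C': 12.0}
```

The shares sum to the grand coalition's savings (efficiency), and a member that contributes nothing to any coalition would receive nothing, which is what makes the scheme unattractive to free riders.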
Citations: 1
OrcBench: A Representative Serverless Benchmark
Q1 Computer Science Pub Date: 2022-07-01 DOI: 10.1109/CLOUD55607.2022.00028
Ryan Hancock, Sreeharsha Udayashankar, A. Mashtizadeh, S. Al-Kiswany
Serverless computing is a rapidly growing area of research. No standardized benchmark currently exists for evaluating orchestration-level decisions or executing large serverless workloads, because of the limited data provided by cloud providers. Current benchmarks focus on other aspects, such as the cost of running general types of functions and their runtimes. We introduce OrcBench, the first orchestration benchmark based on the recently published Microsoft Azure serverless data set. OrcBench categorizes 8622 serverless functions into 17 distinct models, which represent 5.6 million invocations from the original trace. OrcBench also incorporates a time-series analysis to identify function chains within the dataset. OrcBench can use these to create workloads that mimic complete serverless applications, including simulated CPU and memory usage. The modeling allows these workloads to be scaled according to the target hardware configuration.
Citations: 3
Serving distributed inference deep learning models in serverless computing
Q1 Computer Science Pub Date: 2022-07-01 DOI: 10.1109/CLOUD55607.2022.00029
K. Mahajan, Rumit Desai
Serverless computing (SC) in an attractive win-win paradigm for cloud providers and customers, simultaneously providing greater flexibility and control over resource utilization for cloud providers while reducing costs through pay-per-use model and no capacity management for customers. While SC has been shown effective for event-triggered web applications, the use of deep learning (DL) applications on SC is limited due to latency-sensitive DL applications and stateless SC. In this paper, we focus on two key problems impacting deployment of distributed inference (DI) models on SC: resource allocation and cold start latency. To address the two problems, we propose a hybrid scheduler for identifying the optimal server resource allocation policy. The hybrid scheduler identifies container allocation based on candidate allocations from greedy strategy as well as deep reinforcement learning based allocation model.
Citations: 1
Layered Contention Mitigation for Cloud Storage
Q1 Computer Science Pub Date: 2022-07-01 DOI: 10.1109/CLOUD55607.2022.00036
Meng Wang, Cesar A. Stuardo, D. Kurniawan, Ray A. O. Sinurat, Haryadi S. Gunawi
We introduce an ecosystem of contention mitigation supports within the operating system, runtime, and library layers. This ecosystem provides an end-to-end request abstraction that enables a uniform set of contention mitigation capabilities, namely request cancellation and delay prediction, that can be stacked together across multiple resource layers. Our evaluation shows that in our ecosystem, multi-resource storage applications are faster by 5-70% starting at 90P (the 90th percentile) compared to popular practices such as speculative execution, and only 3% slower on average compared to a best-case (no contention) scenario.
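The stacked delay-prediction idea can be sketched as follows (our illustration, not the paper's implementation): each resource layer contributes a predicted queueing delay for the request, and the request is cancelled early once the running total already exceeds the latency budget.

```python
def should_cancel(layer_predictions_ms, budget_ms):
    """Sum per-layer delay predictions; cancel at the first layer that
    pushes the total past the latency budget."""
    total = 0.0
    for layer, predicted in layer_predictions_ms:
        total += predicted
        if total > budget_ms:
            return True, layer, total
    return False, None, total

# Illustrative predictions: the OS block layer is contended while the
# runtime and library layers are quiet.
decision = should_cancel(
    [("library", 0.2), ("runtime", 0.5), ("os-block", 12.0)], budget_ms=5.0
)
print(decision)
```

Cancelling at the layer where the prediction blows the budget lets the caller retry on a less contended replica instead of waiting out the tail.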
Citations: 0