
Latest publications — 2018 IEEE International Conference on Cloud Engineering (IC2E)

Time-Scheduled Network Evaluation Based on Interference
Pub Date : 2018-05-16 DOI: 10.1109/IC2E.2018.00063
T. Lee, A. Liotta, Georgios Exarchakos
Industrial IoT applications often require both dependability and flexibility from the underlying networks. Restructuring production lines brings topological changes that directly affect the interference level of each link. When a scheduled network, e.g. IEEE 802.15.4-TSCH (Time Synchronized Channel Hopping), is used to ensure dependability in low-power networks, transmissions must be rescheduled to re-establish effective and reliable end-to-end communication. Typical approaches focus on either centralized or distributed schedulers, with little attention paid to how the chosen solution performs compared to alternatives or in different topologies. In this work, we introduce the concept of online assessment of TSCH schedules and present an automated method for evaluating schedules that takes internal interference and conflicts into account. The network and its TSCH schedule are mapped to a common representation, the interference graph, which is easy to analyze. Experimental results suggest that this evaluation method reflects the performance of the network as measured by packet reception ratio, end-to-end delivery ratio, and latency.
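The abstract does not detail how the interference graph is constructed; the following minimal sketch, under the assumption that two scheduled TSCH cells conflict when they share a timeslot and channel offset and involve nodes within interference range, illustrates the general idea of mapping a schedule to such a graph (the schedule, node names, and the `within_range` callable are all illustrative).

```python
from itertools import combinations

# A minimal sketch (not the authors' implementation): each scheduled
# transmission is a "cell" (slot offset, channel offset, sender, receiver).
# Two cells get an edge in the interference graph when they occupy the same
# timeslot and reuse the same channel offset while their endpoints are
# within interference range of each other.

def build_interference_graph(cells, within_range):
    """cells: list of dicts with keys 'slot', 'channel', 'tx', 'rx'.
    within_range(a, b): assumed callable telling whether nodes a and b
    can interfere (e.g. derived from topology or RSSI measurements)."""
    edges = set()
    for (i, a), (j, b) in combinations(enumerate(cells), 2):
        if a['slot'] != b['slot']:
            continue  # different timeslots never interfere in TSCH
        same_channel = a['channel'] == b['channel']
        overlap = within_range(a['tx'], b['rx']) or within_range(b['tx'], a['rx'])
        if same_channel and overlap:
            edges.add((i, j))
    return edges

schedule = [
    {'slot': 0, 'channel': 2, 'tx': 'A', 'rx': 'B'},
    {'slot': 0, 'channel': 2, 'tx': 'C', 'rx': 'D'},
    {'slot': 1, 'channel': 2, 'tx': 'A', 'rx': 'C'},
]
print(build_interference_graph(schedule, lambda u, v: True))
# -> {(0, 1)}: the two slot-0 cells on the same channel conflict
```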
Citations: 4
Toward Transparent Data Management in Multi-Layer Storage Hierarchy of HPC Systems
Pub Date : 2018-05-16 DOI: 10.1109/IC2E.2018.00046
Bharti Wadhwa, S. Byna, A. Butt
Upcoming exascale high performance computing (HPC) systems are expected to comprise a multi-tier storage hierarchy, and thus will necessitate innovative storage and I/O mechanisms. Traditional disk- and block-based interfaces and file systems face severe challenges in utilizing the capabilities of storage hierarchies due to the lack of hierarchy support and semantic interfaces. Object-based and semantically rich data abstractions for scientific data management on large-scale systems offer a sustainable solution to these challenges. Such data abstractions can also simplify users' involvement in data movement. In this paper, we take the first steps toward realizing such an object abstraction and explore storage mechanisms for these objects to enhance I/O performance, especially for scientific applications. We explore how an object-based interface can facilitate next-generation scalable computing systems by presenting the mapping of data I/O from two real-world HPC scientific use cases: a plasma physics simulation code (VPIC) and a cosmology simulation code (HACC). Our storage model stores data objects in different physical organizations to support data movement across the layers of the memory/storage hierarchy. Our implementation scales well to 16K parallel processes, and compared to the state of the art, such as MPI-IO and HDF5, our object-based data abstractions and data placement strategy in a multi-level storage hierarchy achieve up to 7× I/O performance improvement for scientific data.
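As a rough illustration of what such an object abstraction could look like (a hedged sketch, not the authors' API), the class below hides tier selection and lookup behind a simple put/get interface over a multi-level hierarchy; the tier names and the placement policy are assumptions.

```python
# Hypothetical tiered object store: applications put/get named data objects
# and a placement policy decides which layer of the memory/storage hierarchy
# holds the bytes, hiding explicit data movement from the user.

class TieredObjectStore:
    def __init__(self, tiers):
        # tiers: ordered fastest-to-slowest, e.g. ['dram', 'nvram', 'disk']
        self.tiers = {name: {} for name in tiers}
        self.order = list(tiers)

    def put(self, name, data, hint='hot'):
        # Assumed policy: 'hot' objects go to the fastest tier,
        # everything else to the slowest (capacity) tier.
        tier = self.order[0] if hint == 'hot' else self.order[-1]
        self.tiers[tier][name] = data
        return tier

    def get(self, name):
        # Search tiers fastest-first; a real system would also promote
        # objects toward faster tiers on access.
        for tier in self.order:
            if name in self.tiers[tier]:
                return self.tiers[tier][name]
        raise KeyError(name)

store = TieredObjectStore(['dram', 'nvram', 'disk'])
store.put('particles/step_0001', b'...', hint='hot')
print(store.get('particles/step_0001'))
```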
Citations: 9
OPTiC: Opportunistic Graph Processing in Multi-Tenant Clusters
Pub Date : 2018-05-16 DOI: 10.1109/IC2E.2018.00034
Muntasir Raihan Rahman, Indranil Gupta, Akash Kapoor, Haozhen Ding
We present OPTiC, a multi-tenant scheduler intended for distributed graph processing frameworks. OPTiC proposes opportunistic scheduling, whereby queued jobs can be pre-scheduled at cluster nodes when the cluster is fully busy running jobs. This allows overlapping of data ingress with ongoing computation. To pre-schedule wisely, OPTiC contributes a novel profile-free and cluster-agnostic approach for comparing the progress of graph processing jobs. OPTiC is implemented inside Apache Giraph, with YARN underneath. Our experiments with real workload traces and network models show that OPTiC's opportunistic scheduling improves run time (both at the median and at the tail) by 20%-82% compared to baseline multi-tenancy, in a variety of scenarios.
Citations: 1
Serverless Computing: An Investigation of Factors Influencing Microservice Performance
Pub Date : 2018-04-17 DOI: 10.1109/IC2E.2018.00039
W. Lloyd, S. Ramesh, Swetha Chinthalapati, Lan Ly, S. Pallickara
Serverless computing platforms provide function(s)-as-a-Service (FaaS) to end users while promising reduced hosting costs, high availability, fault tolerance, and dynamic elasticity for hosting individual functions known as microservices. Serverless computing environments, unlike Infrastructure-as-a-Service (IaaS) cloud platforms, abstract infrastructure management from users, including the creation of virtual machines (VMs), operating system containers, and request load balancing. To conserve cloud server capacity and energy, cloud providers allow hosting infrastructure to go COLD, deprovisioning containers when service demand is low and freeing infrastructure to be harnessed by others. In this paper, we present results from our comprehensive investigation into the factors that influence the microservice performance afforded by serverless computing. We examine hosting implications related to infrastructure elasticity, load balancing, provisioning variation, infrastructure retention, and memory reservation size. We identify four states of serverless infrastructure: provider cold, VM cold, container cold, and warm, and demonstrate how microservice performance varies by up to 15x across these states.
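For readers who want to observe the warm-versus-cold effect themselves, a minimal measurement sketch (not the paper's harness) follows; it assumes an already-deployed Lambda function, given the placeholder name `my-microservice`, and uses boto3 to time two back-to-back invocations.

```python
# Invoke the same function twice in a row and compare latencies. The first
# call after a deploy or idle period typically hits a cold container (or a
# VM-cold / provider-cold state); the immediate second call should be warm.
import time
import boto3

client = boto3.client('lambda')

def timed_invoke(name):
    start = time.perf_counter()
    client.invoke(FunctionName=name, Payload=b'{}')
    return time.perf_counter() - start

cold = timed_invoke('my-microservice')   # placeholder function name; likely cold
warm = timed_invoke('my-microservice')   # likely warm
print(f'cold-ish: {cold:.3f}s, warm: {warm:.3f}s')
```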
Citations: 214
Scalable Key Management for Distributed Cloud Storage
Pub Date : 2018-04-17 DOI: 10.1109/IC2E.2018.00051
Mathias Björkqvist, C. Cachin, Felix Engelmann, A. Sorniotti
As the use of cryptography increases in all areas of computing, efficient solutions for key management in distributed systems are needed. Large deployments in the cloud can require millions of keys for thousands of clients. Current approaches for serving keys rely on centralized components, which do not scale as desired. This work reports on the realization of a key manager that uses an untrusted distributed key-value store (KVS) and offers consistent key distribution over the Key Management Interoperability Protocol (KMIP). To achieve confidentiality, it uses a key hierarchy in which every key except the root key is encrypted by its respective parent key. The hierarchy also allows for key rotation and, ultimately, for secure deletion of data. The design permits key rotation to proceed concurrently with key-serving operations. A prototype was integrated with IBM Spectrum Scale, a highly scalable cluster file system, where it serves keys for file encryption. Linear scalability was achieved even under load from concurrent key updates. The implementation shows that the approach is viable, works as intended, and is suitable for high-throughput key serving in cloud platforms.
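A compact sketch of the key-hierarchy idea appears below. It is not the paper's implementation; it simply uses Fernet keys to show how every non-root key can be stored only in wrapped (parent-encrypted) form, so that rotating or deleting a parent affects everything beneath it.

```python
# Minimal key hierarchy: only wrapped keys leave the trusted key manager,
# so an untrusted KVS never sees plaintext key material. Deleting a key's
# ancestors renders the data below it unrecoverable (crypto-shredding).
from cryptography.fernet import Fernet

root_key = Fernet.generate_key()          # kept in the trusted key manager

def wrap(parent_key, child_key):
    return Fernet(parent_key).encrypt(child_key)

def unwrap(parent_key, wrapped_child):
    return Fernet(parent_key).decrypt(wrapped_child)

# Build a two-level hierarchy: root -> tenant key -> file key.
tenant_key = Fernet.generate_key()
file_key = Fernet.generate_key()
stored = {                                 # only wrapped keys go to the untrusted KVS
    'tenant': wrap(root_key, tenant_key),
    'file':   wrap(tenant_key, file_key),
}

# To serve the file key, walk down the chain unwrapping with each parent.
k = unwrap(unwrap(root_key, stored['tenant']), stored['file'])
assert k == file_key
```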
Citations: 3
EMMA: Distributed QoS-Aware MQTT Middleware for Edge Computing Applications
Pub Date : 2018-04-17 DOI: 10.1109/IC2E.2018.00043
T. Rausch, Stefan Nastic, S. Dustdar
Publish–subscribe middleware is a popular technology for facilitating device-to-device communication in large-scale distributed Internet of Things (IoT) scenarios. However, the stringent quality of service (QoS) requirements imposed by many applications cannot be met by cloud-based solutions alone. Edge computing is considered a key enabler for such applications. Client mobility and dynamic resource availability are prominent challenges in edge computing architectures. In this paper, we present EMMA, an edge-enabled publish–subscribe middleware that addresses these challenges. EMMA continuously monitors network QoS and orchestrates a network of MQTT protocol brokers. It transparently migrates MQTT clients to brokers in close proximity to optimize QoS. Experiments in a real-world testbed show that EMMA can significantly reduce the end-to-end latencies incurred by network link usage, even in the face of client mobility and unpredictable resource availability.
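The following hedged sketch approximates the proximity idea from the client side (EMMA itself performs the QoS monitoring and client migration transparently in the middleware): probe each candidate broker's latency and pick the closest one. The broker hostnames are placeholders.

```python
# Pick the lowest-latency MQTT broker from a candidate list using TCP
# connect time as a rough QoS metric; repeating the probe periodically
# approximates the continuous monitoring described in the abstract.
import socket
import time

BROKERS = [('broker-edge-1.example.com', 1883),
           ('broker-edge-2.example.com', 1883),
           ('broker-cloud.example.com', 1883)]

def probe(host, port, timeout=1.0):
    """Rough proximity metric: time to open a TCP connection to the broker."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.perf_counter() - start
    except OSError:
        return float('inf')   # unreachable brokers are never chosen

best_host, best_port = min(BROKERS, key=lambda b: probe(*b))
# An MQTT client (e.g. paho-mqtt) would then (re)connect to this broker.
print('connect MQTT client to', best_host, best_port)
```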
Citations: 66
Geospatial Analytics in the Large for Monitoring Depth of Cover for Buried Pipeline Infrastructure
Pub Date : 2018-04-17 DOI: 10.1109/IC2E.2018.00049
Michael Hornacek, D. Schall, Philipp Glira, Sebastian Geiger, Andreas Egger, Andrei Filip, C. Windisch, Mike Liepe
Operators of pipeline infrastructure buried underground are in many countries required to ensure that depth of cover—a measure of the quantity of soil covering a pipeline—lies within prescribed bounds. Traditionally, monitoring depth of cover at scale has been carried out qualitatively by means of visual inspection. We instead rely on airborne remote sensing techniques to obtain densely sampled ground surface point measurements over the pipeline's right of way, from which we determine depth of cover using automated algorithms. Proceeding in this manner yields a reproducible, quantitative approach to monitoring depth of cover, yet the demands that the scale of real-world pipeline monitoring scenarios places on compute and storage resources can be substantial. We show that the scalability afforded by the cloud can be leveraged to address such scenarios, distributing the algorithms we employ to take advantage of multiple compute nodes and exploiting elastic storage. While the use case underlying this paper is monitoring depth of cover, our proposed architecture can be applied more broadly to a wide variety of geospatial analytics tasks carried out 'in the large', including change detection, semantic classification or segmentation, and computation of vegetation indices.
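The measurement at the heart of the use case can be stated simply: depth of cover at a point along the route is the ground-surface elevation directly above the pipe minus the elevation of the pipe's top. The sketch below (with illustrative coordinates, not the paper's data or algorithms) computes it by taking the nearest surface sample in plan view as a stand-in for "directly above".

```python
# Simplified depth-of-cover computation over densely sampled surface points.
import numpy as np

surface = np.array([[0.0, 0.0, 102.3],      # (x, y, z) ground-surface samples
                    [5.0, 0.0, 101.9],
                    [10.0, 0.0, 101.5]])
pipe = np.array([[1.0, 0.0, 100.8],         # (x, y, z) of the pipe crown along the route
                 [9.0, 0.0, 100.9]])

def depth_of_cover(pipe_pts, surf_pts):
    cover = []
    for x, y, crown_z in pipe_pts:
        d2 = (surf_pts[:, 0] - x) ** 2 + (surf_pts[:, 1] - y) ** 2
        surf_z = surf_pts[np.argmin(d2), 2]  # nearest surface sample in plan view
        cover.append(surf_z - crown_z)
    return np.array(cover)

doc = depth_of_cover(pipe, surface)
print(doc, doc >= 1.2)   # flag points below a hypothetical 1.2 m minimum bound
```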
Citations: 3
Tracking Causal Order in AWS Lambda Applications
Pub Date : 2018-04-17 DOI: 10.1109/IC2E.2018.00027
Wei-Tsung Lin, C. Krintz, R. Wolski, Michael Zhang, Xiaogang Cai, Tongjun Li, W. Xu
Serverless computing is a new cloud programming and deployment paradigm that is receiving widespread uptake. Serverless offerings such as Amazon Web Services (AWS) Lambda, Google Functions, and Azure Functions automatically execute simple functions uploaded by developers in response to cloud-based event triggers. The serverless abstraction greatly simplifies the integration of concurrency and parallelism into cloud applications, and enables deployment of scalable distributed systems and services at very low cost. Although a significant first step, the serverless abstraction requires tools that software engineers can use to reason about, debug, and optimize their increasingly complex, asynchronous applications. Toward this end, we investigate the design and implementation of GammaRay, a cloud service that extracts causal dependencies across functions and through cloud services without programmer intervention. We implement GammaRay for AWS Lambda and evaluate the overheads that it introduces for serverless micro-benchmarks and applications written in Python.
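As a hedged illustration of the kind of causal metadata such a service must reconstruct (not GammaRay's actual mechanism, which works without programmer intervention), the sketch below shows a handler that explicitly threads a causal chain of invocation ids through downstream Lambda invocations; the downstream function name is a placeholder.

```python
# Each invocation carries the chain of invocation ids that led to it and
# extends the chain whenever it triggers another function, so an offline
# analysis can reconstruct the causal (partial) order across functions.
import json
import uuid
import boto3

lam = boto3.client('lambda')

def handler(event, context):
    my_id = str(uuid.uuid4())
    chain = event.get('causal_chain', []) + [my_id]   # happened-before history

    # ... application logic ...

    # Any downstream invocation inherits the extended chain.
    lam.invoke(FunctionName='downstream-fn',           # placeholder name
               InvocationType='Event',
               Payload=json.dumps({'causal_chain': chain}).encode())
    return {'causal_chain': chain}
```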
Citations: 27
Container-Based Virtualization for Heterogeneous HPC Clouds: Insights from the EU H2020 CloudLightning Project
Pub Date : 2018-04-17 DOI: 10.1109/IC2E.2018.00074
M. Khan, Tobias Becker, Perumal Kuppuudaiyar, A. Elster
Building and successfully deploying applications on high-end heterogeneous resources such as GPUs, MICs, or FPGAs, typically with several library dependencies, is a complex task. Modern containerization provides a lightweight virtualization environment that can help solve some of these complex deployment and execution issues. By using containers, software can be packaged with all of its dependencies, tested in a single environment, and deployed easily on heterogeneous architectures. In this paper, we present our experiences with the container-based virtualized solutions that we developed for the use case applications in the EU H2020 project CloudLightning. We describe how these solutions are managed and orchestrated on the heterogeneous resources of our test-bed. The use cases include a genomics application targeting FPGA-based DFEs, an upscaling application for reservoir modeling, a ray tracing application targeting the MIC (Intel Xeon Phi co-processor), and a BLAS application with libraries optimized for both CPU and GPU. We also include an overview of the CloudLightning project and of how the use case applications have been developed for use as cloud services in its self-organizing, self-managing cloud technology.
Citations: 8
Deadline-Aware Scheduling and Routing for Inter-Datacenter Multicast Transfers
Pub Date : 2018-04-17 DOI: 10.1109/IC2E.2018.00035
Siqi Ji, Shuhao Liu, Baochun Li
Many applications, such as geo-replication, need to deliver multiple copies of data from a single datacenter to multiple datacenters, which improves fault tolerance, increases availability, and achieves high service quality. These applications usually require completing multicast transfers before certain deadlines. Some existing works consider only unicast transfers, which is not appropriate for the multicast transmission type. An alternative approach proposed by existing works is to find a minimum-weight Steiner tree for each transfer. Instead of using only one tree per transfer, we propose to use one or multiple trees, which increases the flexibility of routing, improves the utilization of available bandwidth, and increases the throughput of each transfer. In this paper, we focus on the multicast transmission type and propose an efficient and effective solution that maximizes throughput for all transfer requests while meeting deadlines. We also show that our solution can reduce packet reordering by selecting very few Steiner trees for each transfer. We have implemented our solution on a software-defined overlay network at the application layer, and our real-world experiments on the Google Cloud Platform show that our system effectively improves network throughput and achieves a lower traffic rejection rate compared to existing related work.
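The single-tree baseline that the authors contrast with can be sketched directly with NetworkX's Steiner-tree approximation; the topology and link weights below are illustrative, and the paper's own contribution goes further by splitting a transfer across multiple trees.

```python
# Compute one minimum-weight Steiner tree for a multicast transfer from a
# source datacenter to a set of receiver datacenters (2-approximation).
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

G = nx.Graph()
G.add_weighted_edges_from([            # inter-datacenter links with link costs
    ('dc1', 'dc2', 3), ('dc1', 'dc3', 2), ('dc2', 'dc3', 2),
    ('dc3', 'dc4', 1), ('dc2', 'dc4', 4),
])

source, receivers = 'dc1', ['dc2', 'dc4']
tree = steiner_tree(G, [source] + receivers, weight='weight')
print(sorted(tree.edges(data='weight')))
# The multicast transfer would then be routed along this tree's edges.
```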
Citations: 22