
Latest Publications: 2016 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid)

Fogbow: A Middleware for the Federation of IaaS Clouds
F. Brasileiro, G. Silva, Francisco Araujo, Marcos Nobrega, Igor Silva, Gustavo Rocha
This paper presents a new middleware, called Fogbow, designed to support large federations of Infrastructure-as-a-Service (IaaS) cloud providers. Fogbow follows a novel approach that implements federation functionalities outside the cloud orchestrator. This approach provides great flexibility, since it can use plug-ins that allow for the definition of precise interaction points between the federation middleware and the underlying cloud orchestrator. The resulting architecture, which relies on standards to reconcile different orchestrators' peculiarities, is thereby able to provide a common API that decouples federation functionalities from orchestrator functionalities. In the demonstration, we showcase how Fogbow has been used to implement several cloud federations with different requirements.
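The core idea (federation logic behind a common API, orchestrator specifics behind plug-ins) can be sketched as follows. All names here are hypothetical illustrations; Fogbow's real plug-in API differs.

```python
from abc import ABC, abstractmethod

class OrchestratorPlugin(ABC):
    """Interaction point between the federation middleware and one
    underlying cloud orchestrator (hypothetical interface)."""

    @abstractmethod
    def create_instance(self, spec: dict) -> str: ...

    @abstractmethod
    def delete_instance(self, instance_id: str) -> None: ...

class FakeOrchestratorPlugin(OrchestratorPlugin):
    """Stand-in orchestrator used here only to exercise the interface."""
    def __init__(self):
        self.instances = {}
        self._next = 0
    def create_instance(self, spec):
        self._next += 1
        iid = f"vm-{self._next}"
        self.instances[iid] = spec
        return iid
    def delete_instance(self, instance_id):
        del self.instances[instance_id]

class FederationAPI:
    """Common API: routes requests to per-member plug-ins, so federation
    logic never touches orchestrator internals."""
    def __init__(self, members: dict):
        self.members = members  # federation member name -> plugin
    def request(self, member: str, spec: dict) -> str:
        return self.members[member].create_instance(spec)

fed = FederationAPI({"site-a": FakeOrchestratorPlugin()})
vm = fed.request("site-a", {"vcpus": 2, "ram_gb": 4})
```

Supporting a new orchestrator then means writing one plug-in, with no change to the federation-facing API.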
DOI: 10.1109/CCGrid.2016.12
Citations: 17
Reusing Resource Coalitions for Efficient Scheduling on the Intercloud
Teodora Selea, Adrian F. Spataru, M. Frîncu
The envisioned intercloud, bridging numerous cloud providers and offering clients the ability to run their applications on specific configurations unavailable to single clouds, poses challenges with respect to selecting the appropriate resources for deploying VMs. Reasons include the large distributed scale and VM performance fluctuations. Reusing previously "successful" resource coalitions may be an alternative to the brute-force search employed by many existing scheduling algorithms. The reason for reusing resources is motivated by an implicit trust in previous successful executions that have not experienced the VM performance fluctuations described in many research papers on cloud performance. Furthermore, the data deluge coming from services monitoring the load and availability of resources forces a shift in traditional centralized and decentralized resource management by emphasizing the need for edge computing. In this way, only metadata is sent to the resource management system for resource matchmaking. In this paper we propose a bottom-up monitoring architecture and a proof-of-concept platform for scheduling applications based on resource coalition reuse. We consider static coalitions and neglect any interference from other coalitions by considering only the historical behavior of a particular coalition and not the overall state of the system in the past and now. We test our prototype on real traces by comparing with a random approach and discuss the results by outlining its benefits as well as some future work on run-time coalition adaptation and global influences.
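A toy sketch of the reuse-over-search idea described above (hypothetical data model, not the paper's actual algorithm): prefer the smallest historically successful coalition that still covers the demand, and fall back to brute-force subset search only when no past coalition qualifies.

```python
from itertools import combinations

def reuse_or_search(history, nodes, demand):
    """history: list of (coalition, succeeded) pairs from past runs.
    nodes: node name -> capacity. Reuse the smallest past coalition that
    succeeded and still covers the demand; brute-force search otherwise."""
    reusable = [c for c, ok in history
                if ok and sum(nodes[n] for n in c) >= demand]
    if reusable:
        return min(reusable, key=len), "reused"
    # brute force: smallest subset of nodes covering the demand
    for k in range(1, len(nodes) + 1):
        for combo in combinations(nodes, k):
            if sum(nodes[n] for n in combo) >= demand:
                return set(combo), "searched"
    return None, "infeasible"

nodes = {"n1": 4, "n2": 8, "n3": 16}
history = [({"n2", "n3"}, True), ({"n1"}, False)]
result = reuse_or_search(history, nodes, 20)  # reuses the past coalition
```

The brute-force branch is exponential in the number of nodes, which is exactly why a trusted-history shortcut pays off at intercloud scale.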
DOI: 10.1109/CCGrid.2016.45
Citations: 3
A Quality-Driven Approach for Building Heterogeneous Distributed Databases: The Case of Data Warehouses
Sabrina Abdellaoui, Ladjel Bellatreche, Fahima Nader
A Data Warehouse (DW) is a collection of data, consolidated from several heterogeneous sources, used to perform data analysis and support decision making in an organization. The Extract-Transform-Load (ETL) phase plays a crucial role in designing a DW. To overcome the complexity of the ETL phase, different studies have recently proposed the use of ontologies. Ontology-based ETL approaches have been used to reduce heterogeneity between data sources and ensure automation of the ETL process. Existing studies in semantic ETL have largely focused on fulfilling functional requirements. However, the quality dimension of the ETL process has not been sufficiently considered by these studies. As the amount of data has exploded with the advent of the big data era, dealing with quality challenges in the early stages of designing the process becomes more important than ever. To address this issue, we propose to keep data quality requirements at the center of the ETL phase design. We present in this paper an approach defining the ETL process at the ontological level. We define a set of quality indicators and quantitative measures that can anticipate data quality problems and identify causes of deficiencies. Our approach checks the quality of data before loading them into the target data warehouse to avoid the propagation of corrupted data. Finally, our proposal is validated through a case study, using Oracle Semantic DataBase sources (SDBs), where each source references the Lehigh University BenchMark ontology (LUBM).
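The check-before-load idea can be illustrated with one simple quality indicator. This is a minimal sketch with invented rule names, not the paper's ontological indicator set: a batch whose field completeness falls below a threshold is rejected before it reaches the warehouse.

```python
def completeness(rows, field):
    """Fraction of rows where `field` is present and non-null."""
    return sum(1 for r in rows if r.get(field) is not None) / len(rows)

def check_before_load(rows, rules):
    """rules: field -> minimum completeness threshold.
    Returns (ok, violations) so corrupted batches never propagate
    into the target warehouse."""
    violations = {f: completeness(rows, f)
                  for f, thr in rules.items()
                  if completeness(rows, f) < thr}
    return (not violations), violations

batch = [{"id": 1, "dept": "cs"}, {"id": 2, "dept": None}, {"id": 3}]
ok, bad = check_before_load(batch, {"id": 1.0, "dept": 0.9})
# dept completeness is 1/3, below the 0.9 threshold -> batch rejected
```

In the same spirit, indicators for validity, consistency, or timeliness would each contribute a measure and a threshold, and the violations dictionary identifies the cause of the deficiency.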
DOI: 10.1109/CCGrid.2016.79
Citations: 3
Increasing the Performance of Data Centers by Combining Remote GPU Virtualization with Slurm
Sergio Iserte, Javier Prades, C. Reaño, F. Silla
The use of Graphics Processing Units (GPUs) presents several side effects, such as increased acquisition costs as well as larger space requirements. Furthermore, GPUs require a non-negligible amount of energy even while idle. Additionally, GPU utilization is usually low for most applications. Using the virtual GPUs provided by the remote GPU virtualization mechanism may address the concerns associated with the use of these devices. However, in the same way as workload managers map GPU resources to applications, virtual GPUs should also be scheduled before job execution. Nevertheless, current workload managers are not able to deal with virtual GPUs. In this paper we analyze the performance attained by a cluster using the rCUDA remote GPU virtualization middleware and a modified version of the Slurm workload manager, which is now able to map remote virtual GPUs to jobs. Results show that cluster throughput is doubled while total energy consumption is reduced by up to 40%. GPU utilization is also increased.
DOI: 10.1109/CCGrid.2016.26
Citations: 23
AMRZone: A Runtime AMR Data Sharing Framework for Scientific Applications
Wenzhao Zhang, Houjun Tang, Steve Harenberg, S. Byna, Xiaocheng Zou, D. Devendran, Daniel F. Martin, Kesheng Wu, Bin Dong, S. Klasky, N. Samatova
Frameworks that facilitate runtime data sharing across multiple applications are of great importance for scientific data analytics. Although existing frameworks work well over uniform mesh data, they cannot effectively handle adaptive mesh refinement (AMR) data. The challenges in constructing an AMR-capable framework include: (1) designing an architecture that facilitates online AMR data management, (2) achieving a load-balanced AMR data distribution for the data staging space at runtime, and (3) building an effective online index to support the unique spatial data retrieval requirements of AMR data. Towards addressing these challenges to support runtime AMR data sharing across scientific applications, we present the AMRZone framework. Experiments over real-world AMR datasets demonstrate AMRZone's effectiveness at achieving a balanced workload distribution, reading/writing large-scale datasets with thousands of parallel processes, and satisfying queries with spatial constraints. Moreover, AMRZone's performance and scalability are comparable with existing state-of-the-art work when tested over uniform mesh data with up to 16384 cores; in the best case, our framework achieves a 46% performance improvement.
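Challenges (2) and (3) can be sketched in miniature (1-D bounding boxes and a linear scan instead of a real spatial index; AMRZone's actual mechanisms are not described here): blocks go to the least-loaded staging server by cell count, and a spatial query returns the servers holding intersecting blocks.

```python
import heapq

def distribute(blocks, n_servers):
    """blocks: list of (block_id, bbox, n_cells). Greedy least-loaded
    assignment balances cell counts across the staging space."""
    heap = [(0, s) for s in range(n_servers)]  # (load, server)
    heapq.heapify(heap)
    placement = {}
    for bid, bbox, cells in sorted(blocks, key=lambda b: -b[2]):
        load, s = heapq.heappop(heap)
        placement[bid] = s
        heapq.heappush(heap, (load + cells, s))
    return placement

def query(blocks, placement, region):
    """Servers holding blocks whose bounding box intersects `region`
    (stand-in for an online spatial index lookup)."""
    qx0, qx1 = region
    return {placement[bid] for bid, (x0, x1), _ in blocks
            if x0 < qx1 and qx0 < x1}

blocks = [("b0", (0, 4), 100), ("b1", (4, 8), 60), ("b2", (2, 6), 50)]
place = distribute(blocks, 2)
```

Because AMR refinement concentrates cells in small spatial regions, balancing by cell count rather than block count is what keeps the staging servers evenly loaded.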
DOI: 10.1109/CCGrid.2016.62
Citations: 1
Efficient Heuristics for Placing Large-Scale Distributed Applications on Multiple Clouds
Pedro Silva, Christian Pérez, F. Desprez
With the fast growth of the demand for Cloud computing services, the Cloud has become a very popular platform to develop distributed applications. Features that in the past were available only to big corporations, like fast scalability, availability, and reliability, are now accessible to any customer, including individuals and small companies, thanks to Cloud computing. In order to place an application, a designer must choose, among VM types from private and public cloud providers, those that are capable of hosting her application or its parts, using application requirements, VM prices, and VM resources as criteria. This procedure becomes more complicated when the objective is to place large component-based applications on multiple clouds. In this case, the number of possible configurations explodes, making automation of the placement necessary. In this context, scalability has a central role, since the placement problem is a generalization of the NP-hard multi-dimensional bin packing problem. In this paper we propose efficient greedy heuristics based on first-fit-decreasing and best-fit algorithms, which are capable of computing near-optimal solutions for very large applications, with the objective of minimizing costs and meeting application performance requirements. Through a meticulous evaluation, we show that the greedy heuristics took a few seconds to calculate near-optimal solutions to placements that would require hours or even days when calculated using state-of-the-art solutions, namely exact algorithms or meta-heuristics.
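For reference, the first-fit-decreasing family of heuristics named above looks like this in its simplest form. This sketch uses a single resource dimension and invented VM types; the paper's heuristics are multi-dimensional and multi-cloud.

```python
def ffd_place(components, vm_types):
    """First fit decreasing: components (name -> cpu demand) are sorted
    by decreasing demand; each goes on the first already-open VM that
    still fits, else a new VM of the cheapest fitting type is opened.
    vm_types: list of (capacity, price), sorted by price."""
    vms = []        # each entry: [vm_type_index, free_capacity]
    placement = {}  # component -> index into vms
    for name, demand in sorted(components.items(), key=lambda kv: -kv[1]):
        for i, vm in enumerate(vms):            # first fit
            if vm[1] >= demand:
                vm[1] -= demand
                placement[name] = i
                break
        else:                                   # open cheapest fitting VM
            t = next(i for i, (cap, _) in enumerate(vm_types) if cap >= demand)
            vms.append([t, vm_types[t][0] - demand])
            placement[name] = len(vms) - 1
    cost = sum(vm_types[t][1] for t, _ in vms)
    return placement, vms, cost

comps = {"db": 4, "web": 2, "cache": 2, "worker": 3}
vm_types = [(4, 1.0), (8, 1.8)]  # hypothetical (cpu capacity, hourly price)
placement, vms, cost = ffd_place(comps, vm_types)
```

Sorting by decreasing demand is what gives the heuristic its near-optimal packings in practice: large components fix the VM skeleton, and small ones fill the leftover gaps.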
DOI: 10.1109/CCGrid.2016.77
Citations: 24
Optimizing Massively Parallel Simulations of Infection Spread Through Air-Travel for Policy Analysis
A. Srinivasan, C. D. Sudheer, S. Namilae
Project VIPRA [1] uses a new approach to modeling the potential spread of infections in airplanes, which involves tracking detailed movements of individual passengers. Inherent uncertainties are parameterized, and a parameter sweep is carried out in this space to identify potential vulnerabilities. Simulation time is a major bottleneck for exploration of 'what-if' scenarios in a policy-making context under real-world time constraints. This paper identifies important bottlenecks to efficient computation: inefficiency in workflow, parallel IO, and load imbalance. Our solutions to the above problems include modifying the workflow, optimizing parallel IO, and a new scheme to predict computational time, which leads to efficient load balancing on fewer nodes than currently required. Our techniques reduce the computational time from several hours on 69,000 cores to around 20 minutes on around 39,000 cores on the Blue Waters machine for the same computation. The significance of this paper lies in identifying performance bottlenecks in this class of applications, which is crucial to public health, and presenting a solution that is effective in practice.
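The way predicted computational times enable load balancing can be sketched with a standard longest-processing-time (LPT) assignment (illustrative only; the paper's prediction scheme and scheduler are not specified here): predicted per-task times drive the assignment of sweep points to nodes, instead of naive equal-count chunks.

```python
import heapq

def lpt_assign(predicted, n_nodes):
    """predicted: sweep point -> predicted seconds. LPT: longest tasks
    first, each to the currently least-loaded node. Returns the
    assignment and the resulting makespan (max node load)."""
    heap = [(0.0, n) for n in range(n_nodes)]  # (load, node)
    heapq.heapify(heap)
    assignment = {}
    for task, t in sorted(predicted.items(), key=lambda kv: -kv[1]):
        load, node = heapq.heappop(heap)
        assignment[task] = node
        heapq.heappush(heap, (load + t, node))
    return assignment, max(load for load, _ in heap)

# Hypothetical predicted times for five parameter-sweep points:
times = {"p0": 8.0, "p1": 7.0, "p2": 3.0, "p3": 2.0, "p4": 2.0}
assign, makespan = lpt_assign(times, 2)
```

With equal-count chunking the slowest chunk dictates the wall time; with predictions, skewed tasks spread out, which is how fewer nodes can finish the same sweep sooner.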
DOI: 10.1109/CCGrid.2016.23
Citations: 6
Towards a Resource Manager for Scheduling Frameworks
Aleksandra Kuzmanovska, R. H. Mak, D. Epema
Due to the diversity of the applications that run in large distributed environments, many different application frameworks have been developed, such as MapReduce for data-intensive batch jobs and Spark for interactive data analytics. After initial deployment, a framework starts executing a large set of jobs that are submitted over time. When multiple such frameworks with time-varying resource demands are consolidated in a large distributed environment, static allocation of resources on a per-framework basis leads to low system utilization and to resource fragmentation. The goal of my PhD research is to improve system utilization and framework performance in such consolidated environments by using dynamic resource allocation for efficient resource sharing among frameworks. My contribution towards this goal is the design and implementation of a scalable resource manager that dynamically balances resources across a set of multiple diverse frameworks in a large distributed environment, based on resource requirements, system utilization, or performance levels in the deployed frameworks.
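A minimal sketch of the dynamic-allocation idea, assuming a simple demand-proportional policy (the abstract leaves the actual policy open): cluster slots are periodically rebalanced across frameworks in proportion to their current demand, rather than held in fixed per-framework shares.

```python
def rebalance(total_slots, demands):
    """demands: framework -> outstanding tasks. Proportional shares,
    rounded down, with leftover slots handed to the largest remainders
    so every slot stays allocated."""
    total = sum(demands.values())
    if total == 0:
        return {f: 0 for f in demands}
    shares = {f: total_slots * d // total for f, d in demands.items()}
    leftover = total_slots - sum(shares.values())
    by_remainder = sorted(demands, reverse=True,
                          key=lambda f: (total_slots * demands[f]) % total)
    for f in by_remainder[:leftover]:
        shares[f] += 1
    return shares

# MapReduce batch jobs queue up while Spark sits interactive-idle:
shares = rebalance(100, {"mapreduce": 120, "spark": 30})
```

Rerunning this at each monitoring interval is what turns a statically fragmented cluster into one whose idle slots follow the demand.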
DOI: 10.1109/CCGrid.2016.70
Citations: 0
RMA-MT: A Benchmark Suite for Assessing MPI Multi-threaded RMA Performance
Matthew G. F. Dosanjh, Taylor L. Groves, Ryan E. Grant, R. Brightwell, P. Bridges
Reaching Exascale will require leveraging massive parallelism while potentially exploiting asynchronous communication to help achieve scalability at such large levels of concurrency. MPI is a good candidate for providing the mechanisms to support communication at such large scales. Two existing MPI mechanisms are particularly relevant to Exascale: multi-threading, to support massive concurrency, and Remote Memory Access (RMA), to support asynchronous communication. Unfortunately, multi-threaded MPI RMA code has not been extensively studied. Part of the reason for this is that no public benchmarks or proxy applications exist to assess its performance. The contributions of this paper are the design and demonstration of the first available proxy applications and micro-benchmark suite for multi-threaded RMA in MPI, a study of the multi-threaded RMA performance of different MPI implementations, and an evaluation of how these benchmarks can be used to test development for both performance and correctness.
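The micro-benchmarks described above measure how message rates scale as one-sided puts are issued concurrently from many threads. A minimal sketch of that harness shape, with a shared `bytearray` standing in for an MPI window and slice assignment standing in for `MPI_Put` (illustrative only; the real suite drives MPI one-sided operations under `MPI_THREAD_MULTIPLE`):

```python
import threading
import time

def put_bench(win, nthreads, msgs_per_thread, msg_len):
    """Each thread issues `msgs_per_thread` puts into its own disjoint slice
    of a shared buffer `win`, and the wall time over all threads is recorded."""
    payload = b"x" * msg_len

    def worker(tid):
        base = tid * msgs_per_thread * msg_len
        for i in range(msgs_per_thread):
            off = base + i * msg_len
            win[off:off + msg_len] = payload  # stand-in for an MPI_Put

    threads = [threading.Thread(target=worker, args=(t,))
               for t in range(nthreads)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    return nthreads * msgs_per_thread / elapsed  # aggregate messages/second

# Sweep thread counts the way a message-rate benchmark would.
for n in (1, 2, 4):
    win = bytearray(n * 1000 * 8)
    rate = put_bench(win, n, 1000, 8)
```

A real multi-threaded RMA benchmark would additionally synchronize epochs (e.g. with window fences or flushes) and report per-thread-count rates to expose contention inside the MPI library.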
{"title":"RMA-MT: A Benchmark Suite for Assessing MPI Multi-threaded RMA Performance","authors":"Matthew G. F. Dosanjh, Taylor L. Groves, Ryan E. Grant, R. Brightwell, P. Bridges","doi":"10.1109/CCGrid.2016.84","DOIUrl":"https://doi.org/10.1109/CCGrid.2016.84","url":null,"abstract":"Reaching Exascale will require leveraging massive parallelism while potentially leveraging asynchronous communication to help achieve scalability at such large levels of concurrency. MPI is a good candidate for providing the mechanisms to support communication at such large scales. Two existing MPI mechanisms are particularly relevant to Exascale: multi-threading, to support massive concurrency, and Remote Memory Access (RMA), to support asynchronous communication. Unfortunately, multi-threaded MPI RMA code has not been extensively studied. Part of the reason for this is that no public benchmarks or proxy applications exist to assess its performance. The contributions of this paper are the design and demonstration of the first available proxy applications and micro-benchmark suite for multi-threaded RMA in MPI, a study of multi-threaded RMA performance of different MPI implementations, and an evaluation of how these benchmarks can be used to test development for both performance and correctness.","PeriodicalId":103641,"journal":{"name":"2016 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128893820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 25
HPC-Reuse: Efficient Process Creation for Running MPI and Hadoop MapReduce on Supercomputers
Thanh-Chung Dao, S. Chiba
Hadoop and Spark analytics are widely used for large-scale data processing on commodity clusters. In terms of productivity and maturity, running them on supercomputers is a better choice than developing new frameworks from scratch. YARN, a key component of Hadoop, is responsible for resource management, and it adopts dynamic management for job execution and scheduling. We identify three Ds (3D) dynamic characteristics in YARN-like management: on-Demand (processes created during job execution), Diverse jobs, and Detailed (fine-grained allocation). This dynamic management does not fit typical resource managers on supercomputers, such as PBS, which exhibit three Ss (3S) static characteristics: Stationary (no processes created during execution), Single job, and Shallow (coarse-grained allocation). In this paper, we propose HPC-Reuse, positioned between YARN-like and PBS-like resource managers, to provide better support for dynamic management. HPC-Reuse helps avoid process creation, such as MPI-Spawn, and enables MPI communication over Hadoop processes. Our experimental results show that HPC-Reuse can reduce the execution time of iterative PageRank by 26%.
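The evaluation above uses iterative PageRank as its workload. For reference, the standard power-iteration form of that workload can be sketched as follows (toy three-node graph, pure Python; not the paper's Hadoop job):

```python
def pagerank(links, d=0.85, iters=50):
    """Power-iteration PageRank over an adjacency dict {node: [out-neighbours]}."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        # Teleport term, then distribute each node's rank over its out-links.
        new = {v: (1.0 - d) / n for v in nodes}
        for v, outs in links.items():
            if outs:
                share = rank[v] / len(outs)
                for w in outs:
                    new[w] += d * share
            else:
                # Dangling node: spread its rank uniformly over all nodes.
                for w in nodes:
                    new[w] += d * rank[v] / n
        rank = new
    return rank

ranks = pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]})
```

Each iteration is a full pass over the graph, so on a framework like Hadoop MapReduce every iteration is a separate job; this is exactly the pattern where repeated process creation dominates and process reuse pays off.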
{"title":"HPC-Reuse: Efficient Process Creation for Running MPI and Hadoop MapReduce on Supercomputers","authors":"Thanh-Chung Dao, S. Chiba","doi":"10.1109/CCGrid.2016.72","DOIUrl":"https://doi.org/10.1109/CCGrid.2016.72","url":null,"abstract":"Hadoop and Spark analytics are used widely for large-scale data processing on commodity clusters. It is better choice to run them on supercomputers in aspects of productivity and maturity rather than developing new frameworks from scratch. YARN, a key component of Hadoop, is responsible for resource management. YARN adopts dynamic management for job execution and scheduling. We identify three Ds (3D) dynamic characteristics from YARN-like management: on-Demand (processes created during job execution), Diverse job, and Detailed (fine-grained allocation). The dynamic management does not fit into typical resource managers on supercomputers, for example PBS, that are identified having three Ss (3S) static characteristics: Stationary (no newly created process during execution), Single job, and Shallow (coarse-grained allocation). In this paper, we propose HPC-Reuse located between YARN-like and PBS-like resource managers in order to provide better support of dynamic management. HPC-Reuse helps avoid process creation, such as MPI-Spawn, and enable MPI communication over Hadoop processes. Our experimental results show that HPC-Reuse can reduce execution time of iterative PageRank by 26%.","PeriodicalId":103641,"journal":{"name":"2016 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125474053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9