
Proceedings. IEEE International Conference on Cluster Computing: Latest Publications

Parallel processing of spatial batch-queries using xBR+-trees in solid-state drives
Pub Date: 2019-11-09 DOI: 10.1007/s10586-019-03013-0
George Roumelis, Polychronis Velentzas, M. Vassilakopoulos, A. Corral, Athanasios Fevgas, Y. Manolopoulos
Citations: 8
Predicting the Energy-Consumption of MPI Applications at Scale Using Only a Single Node
Pub Date: 2017-09-05 DOI: 10.1109/CLUSTER.2017.66
F. C. Heinrich, Tom Cornebize, A. Degomme, Arnaud Legrand, Alexandra Carpen-Amarie, S. Hunold, Anne-Cécile Orgerie, M. Quinson
Monitoring and assessing the energy efficiency of supercomputers and data centers is crucial in order to limit and reduce their energy consumption. Applications from the domain of High Performance Computing (HPC), such as MPI applications, account for a significant fraction of the overall energy consumed by HPC centers. Simulation is a popular approach for studying the behavior of these applications in a variety of scenarios, and it is therefore advantageous to be able to study their energy consumption in a cost-efficient, controllable, and also reproducible simulation environment. Alas, simulators supporting HPC applications commonly lack the capability of predicting the energy consumption, particularly when target platforms consist of multi-core nodes. In this work, we aim to accurately predict the energy consumption of MPI applications via simulation. Firstly, we introduce the models required for meaningful simulations: the computation model, the communication model, and the energy model of the target platform. Secondly, we demonstrate that by carefully calibrating these models on a single node, the predicted energy consumption of HPC applications at a larger scale is very close (within a few percent) to real experiments. We further show how to integrate such models into the SimGrid simulation toolkit. In order to obtain good execution time predictions on multi-core architectures, we also establish that it is vital to correctly account for memory effects in simulation. The proposed simulator is validated through an extensive set of experiments with well-known HPC benchmarks. Lastly, we show the simulator can be used to study applications at scale, which allows researchers to save both time and resources compared to real experiments.
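As a rough illustration of the kind of per-node power model that such single-node calibration can yield (the wattages and the linear load model are illustrative assumptions, not the paper's actual SimGrid models):

```python
# Minimal sketch of a calibrated linear power model: per-node power rises
# linearly with load between an idle wattage and a full-load wattage, both
# measured once on a single node. Values are illustrative.

def node_power(load, p_idle=95.0, p_full=170.0):
    """Instantaneous power draw (W) for a core-load fraction in [0, 1]."""
    assert 0.0 <= load <= 1.0
    return p_idle + (p_full - p_idle) * load

def predicted_energy(phases):
    """Energy (J) of one node over a list of (duration_s, load) phases."""
    return sum(dt * node_power(load) for dt, load in phases)

# A toy trace: 10 s of computation at full load, then 2 s mostly idle in
# MPI communication calls.
trace = [(10.0, 1.0), (2.0, 0.1)]
print(predicted_energy(trace))  # 1905.0 J
```

Summing such per-node estimates over all simulated nodes gives a cluster-scale prediction without ever running the application at scale, which is the cost saving the abstract points to.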
Citations: 36
Parallel and Efficient Sensitivity Analysis of Microscopy Image Segmentation Workflows in Hybrid Systems.
Pub Date: 2017-09-01 Epub Date: 2017-09-26 DOI: 10.1109/CLUSTER.2017.28
Willian Barreiros, George Teodoro, Tahsin Kurc, Jun Kong, Alba C M A Melo, Joel Saltz

We investigate efficient sensitivity analysis (SA) of algorithms that segment and classify image features in a large dataset of high-resolution images. Algorithm SA is the process of evaluating variations of methods and parameter values to quantify differences in the output. An SA can be very compute demanding because it requires re-processing the input dataset several times with different parameters to assess variations in output. In this work, we introduce strategies to efficiently speed up SA via runtime optimizations targeting distributed hybrid systems and reuse of computations from runs with different parameters. We evaluate our approach using a cancer image analysis workflow on a hybrid cluster with 256 nodes, each with an Intel Phi and a dual-socket CPU. The SA attained a parallel efficiency of over 90% on 256 nodes. The cooperative execution using the CPUs and the Phi available in each node with smart task assignment strategies resulted in an additional speedup of about 2×. Finally, multi-level computation reuse led to an additional speedup of up to 2.46× on the parallel version. The level of performance attained with the proposed optimizations will allow the use of SA in large-scale studies.
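The two figures of merit quoted above can be computed as follows (a minimal sketch; the timing values in the example are illustrative, not measurements from the paper):

```python
# Speedup and parallel efficiency, the standard metrics behind the
# "over 90% efficiency on 256 nodes" claim. Example timings are made up.

def speedup(t_baseline, t_optimized):
    """How many times faster the optimized run is than the baseline."""
    return t_baseline / t_optimized

def parallel_efficiency(t_one_node, t_n_nodes, n_nodes):
    """Fraction of ideal linear scaling achieved on n_nodes."""
    return speedup(t_one_node, t_n_nodes) / n_nodes

# E.g., a run taking 2400 s on one node and 10.2 s on 256 nodes:
print(parallel_efficiency(2400.0, 10.2, 256))  # ≈ 0.92, i.e. about 92%
```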

Citations: 6
FTS 2016 Workshop Keynote Speech
Pub Date: 2016-01-01 DOI: 10.1109/CLUSTER.2016.98
D. Abramson
Debugging software has always been difficult, with little tool support available. Finding faults in parallel programs is even harder because the machines and problems are so large, and the amount of state to be examined becomes prohibitive. Faults are often introduced when codes are modified, the software or hardware environment changes or they are scaled up to solve larger problems. All too often we hear the programmers scream “It's not my fault!” Over the years we have developed a technique called “Relative Debugging”, in which a code is debugged against another, reference, version. This makes the process simpler because programmers can compare the state of computation between a faulty version and a previous code that is correct, and the programmer doesn't need to have a mental model of what the program state should be. However, relative debugging can also be expensive because it needs to compare large data structures across the machine. Parallel computers offer a way of accelerating the comparisons using parallel algorithms, making the technique practical. In this talk I will introduce relative debugging, show how it assists test and debug, and discuss the various techniques used to scale it up to very large problems and machines. Bio: Professor David Abramson has been involved in computer architecture and high performance computing research since 1979. He has held appointments at Griffith University, CSIRO, RMIT and Monash University. At CSIRO he was the program leader of the Division of Information Technology High Performance Computing Program, and was also an adjunct Associate Professor at RMIT in Melbourne. He served as a program manager and chief investigator in the Co-operative Research Centre for Intelligent Decisions Systems and the Co-operative Research Centre for Enterprise Distributed Systems. He was the Director of the Monash e-Education Centre and a Professor of Computer Science in the Faculty of Information Technology at Monash University. 
Abramson is currently the Director of the Research Computing Centre at the University of Queensland. He is a fellow of the Association for Computing Machinery (ACM), the Academy of Science and Technological Engineering (ATSE) and the Australian Computer Society (ACS), and a Senior Member of the IEEE.
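A minimal sketch of the comparison at the heart of relative debugging (the data shapes, names and tolerance are illustrative assumptions, not details of the tool described in the talk):

```python
# Walk the state of a suspect run alongside a reference run at a matching
# point in execution and report only the leaves that diverge, so the
# programmer needs no mental model of what the state "should" be.

def compare_state(reference, suspect, tol=1e-9, path="root"):
    """Yield (path, ref_value, bad_value) for every mismatching leaf."""
    if isinstance(reference, dict):
        for key in reference:
            yield from compare_state(reference[key], suspect.get(key),
                                     tol, f"{path}.{key}")
    elif isinstance(reference, (list, tuple)):
        for i, (r, s) in enumerate(zip(reference, suspect)):
            yield from compare_state(r, s, tol, f"{path}[{i}]")
    elif isinstance(reference, float):
        if suspect is None or abs(reference - suspect) > tol:
            yield (path, reference, suspect)
    elif reference != suspect:
        yield (path, reference, suspect)

ref = {"grid": [1.0, 2.0, 3.0], "step": 42}
bad = {"grid": [1.0, 2.5, 3.0], "step": 42}
print(list(compare_state(ref, bad)))  # only the faulty cell is reported
```

The expensive part the talk addresses is exactly this traversal when the structures are distributed across a parallel machine, which is why the comparisons themselves are parallelized.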
Citations: 0
Letter from the general chair
Pub Date: 2013-09-01 DOI: 10.1109/CLUSTER.2013.6702606
Craig Stewart
On behalf of the organizing committee, I am pleased to welcome you to Indianapolis and the 15th IEEE International Conference on Cluster Computing. I hope you enjoy your visit to our beautiful city. Indianapolis has undergone a real renaissance in recent years with many new buildings and an array of new highlights including excellent museums related to culture, the arts, and sports.
Citations: 0
Experiences with hybrid clusters
Pub Date: 2009-10-16 DOI: 10.1109/CLUSTR.2009.5289126
D. Jamsek, E. V. Hensbergen
The complexity of modern microprocessor design, involving billions of transistors at ever denser scales, creates many challenges, particularly in the areas of design reliability and predictable yields. Researchers at IBM's Austin Research Lab have increasingly depended on software-based simulation of various aspects of the design and manufacturing process to help address these challenges. The computational complexity and sheer scale of these simulations have led to the exploration of high-performance hybrid computing clusters to accelerate the design process. Currently, the hybrid clusters in use are composed primarily of commodity workstations and servers incorporating commodity NVIDIA-based GPU graphics cards and TESLA GPU computational accelerators. We have also been experimenting with blade clusters composed of both general-purpose servers and PowerXcell accelerators leveraging the computational throughput of the Cell processor. In this paper we will detail our experiences with accelerating our workloads on these hybrid cluster platforms. We will discuss our initial approach of combining hybrid runtimes such as CUDA with MPI to address cluster computation. We will also describe a custom hybrid cluster infrastructure we are developing to deal with some of the perceived shortcomings of MPI and other traditional cluster tools in hybrid computing environments.
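A generic sketch of the first step such a CUDA-plus-MPI design typically takes: partition the work items across MPI ranks, after which each rank hands its block to the local accelerator (this is an illustration of the common pattern, not the authors' infrastructure):

```python
# Contiguous block partitioning of work items across MPI ranks. Each rank
# computes its own range independently from (n_items, n_ranks, rank), so
# no communication is needed to agree on the split.

def block_partition(n_items, n_ranks, rank):
    """Indices of the contiguous block owned by `rank` (0-based)."""
    base, extra = divmod(n_items, n_ranks)
    start = rank * base + min(rank, extra)   # earlier ranks absorb remainder
    size = base + (1 if rank < extra else 0)
    return range(start, start + size)

# 10 items over 4 ranks: block sizes 3, 3, 2, 2, covering each item once.
print([list(block_partition(10, 4, r)) for r in range(4)])
```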
Citations: 2
2009 IEEE International Conference on Cluster Computing and Workshops
Pub Date: 2009-08-01 DOI: 10.1109/CLUSTR.2009.5289149
S. Loebman, D. Nunley, YongChul Kwon, B. Howe, M. Balazinska, J. Gardner
As the datasets used to fuel modern scientific discovery grow increasingly large, they become increasingly difficult to manage using conventional software. Parallel database management systems (DBMSs) and massive-scale data processing systems such as MapReduce hold promise to address this challenge. However, since these systems have not been expressly designed for scientific applications, their efficacy in this domain has not been thoroughly tested. In this paper, we study the performance of these engines in one specific domain: massive astrophysical simulations. We develop a use case that comprises five representative queries. We implement this use case in one distributed DBMS and in the Pig/Hadoop system. We compare the performance of the tools to each other and to hand-written IDL scripts. We find that certain representative analyses are easy to express in each engine's high-level language, and both systems provide competitive performance and improved scalability relative to current IDL-based methods.
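To illustrate the shape such engines impose on an analysis, here is a pure-Python analog of a MapReduce-style aggregation over simulation particles (the query and the field names are invented for illustration; this is not one of the paper's five queries):

```python
from collections import defaultdict

# Map phase: emit one (key, 1) pair per record; reduce phase: sum per key.
# Pig/Hadoop expresses the same aggregation as a GROUP BY plus COUNT.

def map_phase(particles):
    for p in particles:
        yield (p["type"], 1)

def reduce_phase(pairs):
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

particles = [{"type": "gas"}, {"type": "dark"}, {"type": "gas"}]
print(reduce_phase(map_phase(particles)))  # {'gas': 2, 'dark': 1}
```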
Citations: 6
Clouds, clusters and ManyCore: The revolution ahead
Pub Date: 2008-10-31 DOI: 10.1109/CLUSTR.2008.4663749
D. Reed
Without doubt, scientific discovery, business practice and social interactions are moving rapidly from a world of homogeneous and local systems to a world of distributed software, virtual organizations and cloud computing infrastructure, all powered by multicore processors and large-scale infrastructure. In science, a tsunami of new experimental and computational data and a suite of increasingly ubiquitous sensors pose vexing problems in data analysis, transport, visualization and collaboration. In society and business, software as a service and cloud computing are empowering distributed groups. Let's step back and think about the longer term future. Where is the technology going and what are the implications? What architectures are appropriate? How do we manage power and scale? What are the right size building blocks? How do we come to grips with the fact that our clusters and data centers are now bigger than the Internet was just a few years ago? How do we develop and support malleable software? What is the ecosystem of components in which distributed, data-rich applications will operate? How do we optimize performance and reliability? How do we program these systems?
Citations: 3
Designing next generation clusters with InfiniBand and 10GE/iWARP: Opportunities and challenges
Pub Date: 2008-10-31 DOI: 10.1109/CLUSTR.2008.4663772
D. Panda
Clusters with commodity multi-core processors and commodity networking technologies are providing cost-effective solutions for building next generation high-end systems including HPC clusters, servers, parallel file systems and multi-tier data-centers. The talk focuses on two emerging networking technologies (InfiniBand and 10 GE/iWARP) and their associated protocols for designing such systems. In this talk, we critically examine the current and future trends of these technologies and their applicability for designing next generation petascale clusters. The talk starts with the motivations behind these technologies and then focuses on their architectural aspects and applicability to SAN-, LAN- and WAN-based clusters. Designing next generation clusters with high performance, scalability and RAS (reliability, availability and serviceability) capabilities by using these technologies is examined. Current and future trends of InfiniBand and iWARP products are highlighted. The emerging OpenFabrics software stack, covering both these technologies in an integrated manner, is presented. Finally, a set of case studies in designing various clusters with these networking technologies is presented to outline the associated opportunities and challenges.
Citations: 1
Improving system efficiency through scheduling and power management
Pub Date : 2007-09-17 DOI: 10.1109/CLUSTR.2007.4629271
Ryan E. Grant, A. Afsahi
The performance of the emerging commercial chip multithreaded multiprocessors is of great importance to the high performance computing community. However, the growing power consumption of such systems is of increasing concern, and techniques that could be effectively used to increase overall system power efficiency while sustaining performance are very desirable.
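The abstract above names the core tension in power management: lowering voltage and frequency cuts dynamic power, but static power accrues for as long as the job runs. A toy model (my illustration, not taken from the paper) makes the tradeoff concrete: assuming dynamic power grows roughly with the cube of frequency plus a fixed static floor, a scheduler can pick the DVFS state that minimizes energy for a CPU-bound job while still meeting a deadline.

```python
# Toy DVFS energy model -- an illustrative sketch, not the paper's method.
# Assumptions: job is CPU-bound (runtime = cycles / frequency), dynamic
# power scales ~ f^3 (since V roughly tracks f), plus a static power floor.

def energy_at(freq_ghz, cycles_g, p_static_w=10.0, k=3.0):
    """Energy (J) to run `cycles_g` gigacycles at `freq_ghz` GHz."""
    t = cycles_g / freq_ghz             # runtime in seconds (CPU-bound)
    p_dynamic = k * freq_ghz ** 3       # dynamic power, cubic in frequency
    return (p_dynamic + p_static_w) * t

def best_frequency(freqs, cycles_g, deadline_s):
    """Lowest-energy available frequency whose runtime meets the deadline."""
    feasible = [f for f in freqs if cycles_g / f <= deadline_s]
    if not feasible:
        raise ValueError("no available frequency meets the deadline")
    return min(feasible, key=lambda f: energy_at(f, cycles_g))

if __name__ == "__main__":
    p_states = [1.0, 1.5, 2.0, 2.5, 3.0]    # hypothetical P-states, GHz
    job = 6.0                               # gigacycles of work
    f = best_frequency(p_states, job, deadline_s=4.0)
    print(f, round(energy_at(f, job), 1))
```

With these (made-up) constants the cheapest feasible state is 1.5 GHz: racing at 3 GHz more than doubles the energy, while 1.0 GHz would miss the deadline. Raising the static floor shifts the optimum back toward "race to idle", which is exactly why such techniques must be tuned per system.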
Citations: 4